Archive for the ‘Privacy’ Category
posted by William McGeveran
The privacy scandal of the week involves Bloomberg terminals, reporters, and Wall Street traders. It started making the rounds of the financial press in the last couple of days and today reached the New York Times, which led its story by declaring that a “shudder went through Wall Street” in response to the revelations. But as with many of the periodic Facebook privacy scandals, this one is only surprising if you haven’t been paying attention. And it distracts the press and the public from more serious matters.
The story, in a nutshell: a Bloomberg terminal like the one in the picture sits on every trading desk. It is the central platform for managing a constant stream of information about market activity, financial news, economic data, and much more. By making this very expensive equipment a necessity, Michael Bloomberg (now New York’s mayor, of course) built a multibillion-dollar empire and made himself fabulously wealthy.
From the beginning, company employees have been able to look up individual Bloomberg subscribers and scrutinize their most recent activity in the system. That may make some sense for sales and technical personnel (although even then it probably ought to have been more anonymized than it seems to have been). Unfortunately, that access also extended to journalists at the many news outlets that have been added to the Bloomberg corporate family over the years. And these reporters appear to have mined that data routinely for tidbits that might have helped with their stories.
Don’t get me wrong, this is not an example of good privacy practices. But it ain’t exactly the allegations of pervasive bribery, eavesdropping, and hacking by journalists in the employ of Rupert Murdoch. Quartz has a pretty good explanation of the data that was available. Primarily, it boils down to the last time a person logged in, the “functions” used (essentially, what general categories of information services were accessed, such as reports of corporate bond trades), and the transcript of any online customer service chats. Crucially, Quartz notes, “Employees can see how many times each function was used but not further details, like which company’s bonds were being researched.” In other words, a lot of it resembles information that many web sites, including news sites, can already glean about most of their customers, particularly those who are logged in. At most, Bloomberg journalists might have obtained some slight lead that would send them on the hunt for more solid information, much as a tip from a source might. In the incident that brought the practice to light, for example, a reporter surmised that a Goldman Sachs partner might have left the firm because he stopped using his Bloomberg terminal.
posted by Ryan Calo
Judge Richard Posner took the occasion of the Boston bombing to remind us of his view that privacy should lose out to other values. Privacy, argues Judge Posner, is largely about concealing truths “that, if known, would make it more difficult for us to achieve our personal goals.” For instance: privacy helps the victims of domestic violence achieve their personal goal of living free from fear; it helps the elderly achieve their personal goal of staying off of marketing “sucker lists;” and it helps children achieve their personal goal of avoiding sexual predators online.
To be fair, Judge Posner acknowledges that some concealment is fine and that privacy laws may even “do some good.” He worries rather about civil libertarians who would limit the expansion of surveillance to the point that we can neither deter nor apprehend terrorists like the men responsible for bombing the marathon. “There is a tendency to exaggerate the social value of privacy,” Judge Posner believes, and it just might get us killed.
Judge Posner is a founding member of the law and economics movement and, as such, it would seem fair to analyze his claim from the perspective of incentives. Does video surveillance deter crime in general? Empirical evidence suggests that cameras merely displace crime, and Judge Posner concedes that picking terrorists out of a crowd before they act is impracticable. Does video surveillance help with identification? Sure. But the quick identification of the Boston bombers from private footage suggests we have enough surveillance. Moreover, hardened terrorists have proven willing to die in an attack, making identification moot.
Then there are the unintended consequences—a mainstay of economic analyses of the law. The fact that an act of terrorism will be caught on video and spread to every screen in America greatly enhances its intended impact, which in turn makes the option more attractive to our enemies.
One can quibble with my data points. But any honest, empirically-informed cost-benefit analysis of additional surveillance will yield at best a mixed picture. I submit that Judge Posner’s argument yesterday is dead wrong by the terms of the very movement he founded.
posted by Robert Gellman
Privacy advocates have disliked the third-party doctrine at least from the day in 1976 when the Supreme Court decided U.S. v. Miller. Anyone who remembers the Privacy Protection Study Commission knows that its report was heavily influenced by Miller. My first task in my long stint as a congressional staffer was to organize a hearing to receive the report of the Commission in 1977. In the introduction to the report, the Commission called the date of the decision “a fateful day for personal privacy.”
Last year, privacy advocates cheered when Justice Sonia Sotomayor’s concurrence in U.S. v. Jones asked if it was time to reconsider the third-party doctrine. Yet it is likely that it would take a long time before the Supreme Court revisits and overturns the third-party doctrine, if ever. Sotomayor’s opinion didn’t attract a single other Justice.
Can we draft a statute to overturn the third-party doctrine? That is not an easy task, and it may be an unattainable goal politically. Nevertheless, the discussion has to start somewhere. I acknowledge that not everyone wants to overturn Miller. See Orin Kerr’s The Case for the Third-Party Doctrine. I’m certainly not the first person to ask the how-to-do-it question. Dan Solove wrestled with the problem in Digital Dossiers and the Dissipation of Fourth Amendment Privacy.
I’m going at the problem as if I were still a congressional staffer tasked with drafting a bill. I see right away that there is precedent. Somewhat remarkably, Congress partly overturned the Miller decision in 1978 when it enacted The Right to Financial Privacy Act, 12 U.S.C. § 3401 et seq. The RFPA says that if the federal government wants to obtain records of a bank customer, it must notify the customer and allow the customer to challenge the request.
The RFPA is remarkable too for its exemptions and weak standards. The law only applies to the federal government and not to state and local governments. (States may have their own laws applicable to state agencies.) Bank supervisory agencies are largely exempt. The IRS is exempt. Disclosures required by federal law are exempt. Disclosures for government loan programs are exempt. Disclosures for grand jury subpoenas are exempt. That effectively exempts a lot of criminal law enforcement activity. Disclosures to GAO and the CFPB are exempt. Disclosures for investigations of crimes against financial institutions by insiders are exempt. Disclosures to intelligence agencies are exempt. This long – and incomplete – list is the first hint that overturning the third-party doctrine won’t be easy.
We’re not done with the weaknesses in the RFPA. A customer who receives notice of a government request has ten days to challenge the request in federal court. The customer must argue that the records sought are not relevant to the legitimate law enforcement inquiry identified by the government in the notice. The customer loses if there is a demonstrable reason to believe that the law enforcement inquiry is legitimate and a reasonable belief that the records sought are relevant to that inquiry. Relevance and legitimacy are weak standards, to say the least. Good luck winning your case.
Who should get the protection of our bill? The RFPA gives rights to “customers” of a financial institution. A customer is an individual or partnership of five or fewer individuals (how would anyone know?). If legal persons also receive protection, a bill might actually attract corporate support, along with major opposition from every regulatory agency in town. It will be hard enough to pass a bill limited to individuals. The great advantage of playing staffer is that you can apply political criteria to solve knotty policy problems. I’d be inclined to stick to individuals.
posted by Danielle Citron
As All Things Digital’s Kara Swisher reports, LivingSocial experienced a significant hack the other day: over 50 million users’ email addresses, dates of birth, and encrypted passwords were leaked into the hands of Russian hackers (or so it seems). This hack comes on the heels of data breaches at LinkedIn and Zappos. That the passwords were encrypted just means that users had better change their passwords, and fast, because in time the encryption can be broken. A few years ago, I blogged about leaked personal records passing the 500 million mark. Hundreds of millions seems like child’s play today.
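(A brief aside on why “encrypted” passwords are cold comfort: breach reports typically involve hashed passwords, and once a hash leaks, an attacker can guess passwords offline at leisure. The sketch below is purely illustrative, using unsalted SHA-1 as a simplification; the password, the hash, and the guess list are all hypothetical.)

```python
import hashlib

# Hypothetical leaked record: an unsalted SHA-1 password hash.
# (A simplification for illustration; real sites should use salted,
# deliberately slow hashes like bcrypt or scrypt, which make this
# kind of offline guessing attack far more expensive.)
leaked_hash = hashlib.sha1(b"sunshine1").hexdigest()

# The attacker replays a list of common passwords against the hash.
common_passwords = ["123456", "password", "sunshine1", "letmein"]

def crack(target_hash, guesses):
    """Return the first guess whose SHA-1 digest matches, else None."""
    for guess in guesses:
        if hashlib.sha1(guess.encode()).hexdigest() == target_hash:
            return guess
    return None

print(crack(leaked_hash, common_passwords))  # recovers "sunshine1"
```

This is why the advice to change passwords quickly after a breach is sound: the hash buys time, not safety.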
This raises some important questions about what we mean when we talk about personally identifiable information (PII). Paul Schwartz and my co-blogger Dan Solove have done terrific work helping legislators devise meaningful definitions of PII in a world of reidentification. Paul Ohm is currently working on an important project providing a coherent account of sensitive information in the context of current data protection laws. Are someone’s password and date of birth sensitive information deserving of special privacy protection? Beyond the obvious health, credit, and financial information, what other sorts of data do we consider sensitive and why? Answers to these questions are crucial to companies formulating best practices, to the FTC as it continues its robust enforcement of privacy promises and pursuit of deceptive practices, and to legislators considering private-sector privacy regulation of data brokers, as in Senator John Rockefeller’s current efforts.
posted by Frank Pasquale
I was recently reading a Money Laundering Threat Assessment (from 2005), and the following lines came up on p. 49:
[T]he trust laws of some jurisdictions have aided money launderers in their use of trusts to conceal identity and to perpetrate fraud. In certain jurisdictions, such as the Cook Islands, Nevis, and Niue, the trust laws no longer require the names of the settlor and the beneficiaries to be placed in the trust deed, permit settlors to retain control over the trust, and allow trusts to be revocable and of unlimited duration.
My question is: why is this even called a trust? Shouldn’t it bear some other name? At least Liechtenstein has the decency to call its creepy money-hiding methods “Anstalts.”
The larger consequences here are terrifying. The wealth defense industry has created an environment where all manner of swindlers, thieves, and terrorists can hide ill-gotten gains. As a forthcoming University of Pennsylvania piece by Shima Baradaran, Michael Findley, Daniel Nelson, and J.C. Sharman puts it:
On the whole, forming an anonymous shell company is as easy as ever, despite increased regulations following 9/11. The results are disconcerting and demonstrate that we are much too far from a world that is safe from terror.
I nevertheless expect that most of the centomillionaire and billionaire class will continue to fight efforts to crack down on shell companies and trusts, and will find ample “help” to argue their case. Perhaps someone will even pen an ode to financial privacy. Meanwhile, we have no idea what taxes may be due from trillions of dollars in offshore wealth, or to what purposes it is directed.
Expect to hear many more stories on this issue. The stakes could not be higher. As Liu Xiaobo has stated, corruption is the “officialization of the criminal and the criminalization of the official.” Persisting even in a world of brutal want and austerity-induced suffering, tax havenry epitomizes that sinister merger.
posted by Frank Pasquale
First Monday recently published an issue on social media monopolies. These lines from the introduction by Korinna Patelis and Pavlos Hatzopoulos are particularly provocative:
A large part of existing critical thinking on social media has been obsessed with the concept of privacy. . . . Reading through a number of volumes and texts dedicated to the problematic of privacy in social networking one gets the feeling that if the so called “privacy issues” were resolved social media would be radically democratized. Instead of adopting a static view of the concept . . . of “privacy”, critical thinking needs to investigate how the private/public dichotomy is potentially reconfigured in social media networking, and [the] new forms of collectivity that can emerge . . . .
I can even see a way in which privacy rights do not merely displace, but actively work against, egalitarian objectives. Stipulate a population with Group A, which is relatively prosperous and has the time and money to hire agents to use notice-and-consent privacy provisions to its advantage (i.e., figuring out exactly how to disclose information to put its members in the best light possible). Meanwhile, most of Group B is too busy working several jobs to use contracts, law, or agents to its advantage in that way. We should not be surprised if Group A leverages its mastery of privacy law to enhance its position relative to Group B.
Better regulation would restrict use of data, rather than “empower” users (with vastly different levels of power) to restrict collection of data. As data scientist Cathy O’Neil observes:
Read the rest of this post »
posted by Ryan Calo
As if we don’t have enough to worry about, now there’s spyware for your brain. Or, there could be. Researchers at Oxford, Geneva, and Berkeley have created a proof of concept for using commercially available brain-computer interfaces to discover private facts about today’s gamers. Read the rest of this post »
April 14, 2013 at 12:57 am Posted in: Bioethics, Civil Rights, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical), Technology, Uncategorized
posted by Taunya Banks
As a follow-up to my post last week asking about human dignity, unburied bones, and ownership of human cells, here are two related issues that appeared in the Sunday news.
The first item, from Sunday’s Baltimore Sun, is the belated report of a Reuters story about the controversy over disposition of King Richard III’s newly discovered remains, uncovered in a municipal parking lot by the University of Leicester. The long-lost remains of the King, who died in 1485, were exhumed, and the University was given permission to re-inter the remains in Leicester. But the King’s descendants objected, claiming that they were not “consulted … over the exhumation and the license allowing the university to re-bury the King, and [that] this failure breached the European Convention on Human Rights.” They want the body buried in York.
The second item is an op-ed by two medical school academics, Jeffrey Rosenfeld and Christopher E. Mason, that appeared in Sunday’s Washington Post about Association for Molecular Pathology et al v. Myriad Genetics, et al, a case that will be argued in the Supreme Court on April 15th. This is an important case that has been mentioned on this blog as recently as last February. SCOTUS even featured a symposium spurred by the controversy. At issue is whether, on some level, human genes are patentable. Rosenfeld and Mason oppose patenting DNA. On the other hand, much like the researchers discussing the HeLa cell, the respondents, Myriad Genetics, et al, argue that the issue is much narrower, namely whether the “human” aspect of the specific sequence of isolated human DNA is the result of the efforts of the respondent, and thus patentable. Read the rest of this post »
Bartelt’s Dog and the Continuing Vitality of the Supreme Court’s Tacit Distinction between Sense Enhancement and Sense Creation
posted by Albert Wong
Last Term, in an amicus brief in United States v. Jones, 565 U.S. __, several colleagues and I highlighted the Supreme Court’s long, albeit not always clearly stated, history of distinguishing between sense-enhancing and sense-creating technologies for Fourth Amendment purposes. As a practical matter, the Court has consistently subjected technologies in the latter category to closer scrutiny than technologies that merely bolster natural human senses. Thus, the use of searchlights, field glasses, and (to some extent) beepers and airplane-mounted cameras was not found to implicate the Fourth Amendment. As the Court explained, “[n]othing in the Fourth Amendment prohibit[s] the police from augmenting the sensory faculties bestowed upon them at birth with such enhancement as science and technology” may afford. 460 U.S. at 282 (emphasis added). In contrast, the Court has held that technologies that create a new capacity altogether, including movie projectors, wiretaps, ultrasound devices, radar flashlights, directional microphones, thermal imagers, and (as of Jones) GPS tracking devices, do trigger the Fourth Amendment. To hold otherwise, as the Court has stated, would “shrink the realm of guaranteed privacy,” leaving citizens “at the mercy of advancing technology.” 533 U.S. at 34-36.
In fact, of the landmark cases involving technology and the Fourth Amendment during the past 85 years (from United States v. Lee, 274 U.S. 559, in 1927 to Jones in 2012), only in one instance did the Supreme Court appear to deviate from this distinction between sense enhancement and sense creation. In that case, United States v. Place, 462 U.S. 696, and its successors, City of Indianapolis v. Edmond, 531 U.S. 32, and Illinois v. Caballes, 543 U.S. 405, the Court held that the use of trained narcotics-detection dogs (more apparently similar to using a new capacity than merely enhancing a natural human sense) did not implicate the Fourth Amendment. In our amicus brief in Jones, we rationalized Place, Edmond, and Caballes by arguing that dogs were unique, being natural biological creatures that had long been used by the police, even in the time of the Framers. Further, we argued, a canine sniff, unlike the use of, say, a wiretap or a thermal imager, “discloses only the presence or absence of narcotics, a contraband item.” 462 U.S. at 707 (emphasis added). Still, the apparent ‘dog exception’ was rankling. Read the rest of this post »
March 31, 2013 at 11:35 am Posted in: Anonymity, Constitutional Law, Privacy, Privacy (Electronic Surveillance), Privacy (Law Enforcement), Supreme Court, Technology, Uncategorized
posted by Ryan Calo
Amidst all of the discussion of gay marriage at One First Street NE today, you may have missed that the Supreme Court decided Florida v. Jardines. In a five-to-four opinion by Justice Scalia, the Court held that bringing a police dog within the curtilage (in this case, the front porch) of the home to sniff for drugs constitutes a search for purposes of the Fourth Amendment. As Orin Kerr predicted, the opinion turned on the lack of implied consent to approach with a dog, which converted the detectives’ action into a trespass. Justices Thomas, Ginsburg, Sotomayor, and Kagan joined Justice Scalia’s opinion. Justice Alito wrote for the dissent, joined by Justices Kennedy, Breyer, and the Chief Justice. Justice Kagan, joined by Justices Ginsburg and Sotomayor, wrote separately to note that they “could just as happily have decided [the case] by looking to Jardines’ privacy interests.” Read the rest of this post »
posted by Frank Pasquale
An emerging, “solutionist” narrative about drones goes something like this:
Yes, we should be very worried about government misuse of drones at home and abroad. But the answer is not to ban, or even blame, the technology itself. Rather, we need to spread the technology among more people. Worried that the government will spy on you? Get your own drones to watch the watchers. Fearful of malevolent drones? Develop your own protective force. The answer is more technology, not regulation of particular technologies.
I’d like to believe that’s true, if only because technology develops so quickly, and government seems paralyzed by comparison. But I think it’s a naive position. It manages to understate both the threats posed by drones, and the governance challenges they precipitate.
Read the rest of this post »
posted by Ryan Calo
I got the chance to testify at a hearing of the full Senate Judiciary Committee about the domestic use of drones yesterday. The New York Times has this coverage and, for aficionados of torts, I talk about intrusion upon seclusion with Senator Dick Durbin in this clip from NBC News. Should you get a chance to watch the hearing in full, Senator Al Franken’s thoughts at the end were particularly vivid. My written and oral comments were similar to those outlined in my previous post: privacy law places few limits on the use of drones for surveillance, but we should be very careful in crafting any drone-specific legislative response. It happens that, about when I was testifying, my students were taking a final where one of the questions involved a drone filming a private party. I feel they had fair notice that this might be on the exam.
posted by Deven Desai
Just as Neil Richards’s The Perils of Social Reading (101 Georgetown Law Journal 689 (2013)) is out in final form, Netflix released its new social sharing features in partnership with that privacy protector, Facebook. Not that working with Google, Apple, or Microsoft would be much better. There may be things I am missing. But I don’t see how turning on this feature is wise, given that it seems to require you to remember not to share, which makes sharing a bit leakier than you may want.
Apparently you have to connect your Netflix account to Facebook to get the feature to work. The way it works after that link is made poses problems.
According to SlashGear, two rows appear. One, called “Friends’ Favorites,” tells you just that. Now, consider that the algorithm works in part by your rating movies. So if you want to signal that odd documentaries, disturbing art movies, and guilty pleasures (these may range from The Hangover to Twilight) are of interest, you should rate them highly. If you turn this on, are all old ratings shared? And cool! Now everyone knows that you think March of the Penguins and Die Hard are 5 stars. The other row:
is called “Watched By Your Friends,” and it consists of movies and shows that your friends have recently watched. It provides a list of all your Facebook friends who are on Netflix, and you can cycle through individual friends to see what they recently watched. This is an unfiltered list, meaning that it shows all the movies and TV shows that your friends have agreed to share.
Of course, you can control what you share and what you don’t want to share, so if there’s a movie or TV show that you watch, but you don’t want to share it with your friends, you can simply click on the “Don’t Share This” button under each item. Netflix is rolling out the feature over the next couple of days, and the company says that all US members will have access to Netflix social by the end of the week.
Right. So imagine you forget that your viewing habits are broadcast. And what about Roku or other streaming devices? How does one ensure that the “Don’t Share” button is used before the word goes out that you watched one, two, or three movies on drugs, sex, gay culture, how great guns are, etc.?
As Richards puts it, “the ways in which we set up the defaults for sharing matter a great deal. Our reader records implicate our intellectual privacy—the protection of reading from surveillance and interference so that we can read freely, widely, and without inhibition.” So too for video and really any information consumption.
posted by Danielle Citron
Privacy leading lights Dan Solove and Paul Schwartz have recently released the 2013 edition of Privacy Law Fundamentals, a must-have for privacy practitioners, scholars, students, and really anyone who cares about privacy.
Privacy Law Fundamentals is an essential primer on the state of privacy law, capturing the up-to-date developments in legislation, FTC enforcement actions, and cases here and abroad. As Chief Privacy Officers like Intel’s David Hoffman and renowned privacy practitioners like Hogan’s Chris Wolf and Covington’s Kurt Wimmer agree, Privacy Law Fundamentals is an “essential” and “authoritative guide” on privacy law, compact and incredibly useful. For those of you who know Dan and Paul, their work is not only incredibly wise and helpful but also dispensed in person with serious humor. Check out this YouTube video, “Privacy Law in 60 Seconds,” to see what I mean. I think that Psy may have a run for his money on making us smile.
March 8, 2013 at 8:42 am Posted in: Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Gossip & Shaming), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical), Privacy (National Security)
posted by Danielle Citron
Privacy leading light Alan Westin passed away this week. Almost fifty years ago, Westin started his trailblazing work helping us understand the dangers of surveillance technologies. Building on the work that Warren and Brandeis started in “The Right to Privacy” in 1890, Westin published Privacy and Freedom in 1967. A year later, he took his normative case for privacy to the trenches. As Director of the National Academy of Sciences’ Computer Science and Engineering Board, he and a team of researchers studied governmental, commercial, and private organizations using databases to amass, use, and share personal information. Westin’s team interviewed 55 organizations, ranging from local law enforcement and federal agencies like the Social Security Administration to direct-mail companies like R.L. Polk (a predecessor to our behavioral advertising industry).
The 1972 report, Databanks in a Free Society: Computers, Record-Keeping, and Privacy, is a masterpiece. With 14 case studies, the report made clear the extent to which public and private entities had been building substantial computerized dossiers of people’s activities and the risks to economic livelihood, reputation, and self-determination. It demonstrated the unrestrained nature of data collection and sharing, with driver’s license bureaus selling personal information to direct-mail companies and law enforcement sharing arrest records with local and state agencies for employment and licensing matters. Surely influenced by Westin’s earlier work, some data collectors, like the Kansas City Police Department, talked to the team about privacy protections, suggesting the need for verification of source documents, audit logs, passwords, and discipline for improper use of data. Westin’s report called for data collectors to adopt ethical procedures for data collection and sharing, including procedural protections such as notice and chance to correct inaccurate or incomplete information, data minimization requirements, and sharing limits.
Westin’s work shaped the debate about the right to privacy at the dawn of our surveillance era. His change-making agenda was front and center in the Privacy Act of 1974. In the early 1970s, nearly fifty congressional hearings and reports investigated a range of data privacy issues, including the use of census records, access to criminal history records, employers’ use of lie detector tests, and the military and law enforcement’s monitoring of political dissidents. State and federal executives spearheaded investigations of surveillance technologies, including a proposed National Databank Center.
Just as public discourse was consumed with the “data-bank problem,” the courts began to pay attention. In Whalen v. Roe, a 1977 case involving New York’s mandatory collection of prescription drug records, the Supreme Court strongly suggested that the Constitution contains a right to information privacy based on substantive due process. Although it held that the state prescription drug database did not violate the constitutional right to information privacy because it was adequately secured, the Court recognized an individual’s interest in avoiding disclosure of certain kinds of personal information. Writing for the Court, Justice Stevens noted the “threat to privacy implicit in the accumulation of vast amounts of personal information in computerized data banks or other massive government files.” In a concurring opinion, Justice Brennan warned that the “central storage and easy accessibility of computerized data vastly increase the potential for abuse of that information, and I am not prepared to say that future developments will not demonstrate the necessity of some curb on such technology.”
What Westin underscored so long ago, and what Whalen v. Roe signaled, is that technologies used for broad, indiscriminate, and intrusive public surveillance threaten liberty interests. Last term, in United States v. Jones, the Supreme Court signaled that these concerns have Fourth Amendment salience. Concurring opinions indicate that at least five justices have serious Fourth Amendment concerns about law enforcement’s growing surveillance capabilities. Those justices insisted that citizens have reasonable expectations of privacy in substantial quantities of personal information. In our article “The Right to Quantitative Privacy,” David Gray and I are seeking to carry forward Westin’s insights (and those of Brandeis and Warren before him) into the Fourth Amendment arena as the five concurring justices in Jones suggested. More on that to come, but for now, let’s thank Alan Westin for his extraordinary work on the “computerized databanks” problem.
February 24, 2013 at 10:18 am Posted in: Criminal Procedure, Current Events, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Law Enforcement)
posted by Mary Anne Franks
As promised in the comments section of my last post, I offer in this post the outline of my proposal to effectively combat revenge porn. A few preliminary notes: one, this is very much a work in progress as well as being my first foray into drafting legislative language of any kind. Two, a note about terminology: while “revenge porn” is an attention-grabbing term, it is imprecise and potentially misleading. The best I have come up with as a replacement is “non-consensual pornography,” so that is the term I will use throughout this post. I would be interested to hear suggestions for a better term as well as any other constructive thoughts and feedback.
I want to emphasize at the outset that the problem of non-consensual pornography is not limited to the scenarios that receive the most media attention, that is, when A gives B (often an intimate partner) an intimate photo that B distributes without A’s consent. Non-consensual pornography includes the recording and broadcasting of a sexual assault for prurient purposes and distributing sexually graphic images obtained through hacking or other illicit means. Whatever one’s views on pornography more broadly, it should be a non-controversial proposition that pornography must at a minimum be restricted to individuals who are (1) adults and (2) consenting. Federal and state laws take the first very seriously; it is time they took consent requirements seriously as well.
Before I offer my proposal for what a federal criminal prohibition of non-consensual pornography could look like, I want to explain why looking to federal criminal law is the most appropriate and effective response to the problem. In doing so, I do not mean to suggest that other avenues are illegitimate or ill-advised. I support the use of existing laws or other reform proposals to the extent that they are able to deter non-consensual pornography or provide assistance to victims. That being said, here is my case for why federal criminal law is the best way to address non-consensual pornography, in Q&A form.
posted by Danielle Citron
The ethos of our age is the more data, the better, and nowhere is that more true than in the data-broker industry. Data-broker databases contain dossiers on hundreds of millions of individuals, including their Social Security numbers, property records, criminal-justice records, car rentals, credit reports, postal and shipping records, utility bills, gaming, insurance claims, divorce records, social network profiles, online activity, and drug- and food-store records. According to FTC Chairman Jon Leibowitz, companies like Acxiom are the “invisible cyberazzi” that follow us everywhere we go on- and offline, or, as Chris Hoofnagle has aptly called them, “Little Brothers” helping Big Brother and industry.

Data brokers are largely unbridled by regulation. The FTC’s enforcement authority over data brokers stems from the Fair Credit Reporting Act (FCRA), which was passed in 1970 to protect the privacy and accuracy of information included in credit reports. FCRA requires consumer reporting agencies to use reasonable procedures to ensure that entities to which they disclose sensitive consumer data have a permissible purpose for receiving that data. Under FCRA, employers are required to inform individuals about intended adverse actions against them based on their credit reports. Individuals get a chance to explain inaccurate or incomplete information and to contact credit-reporting agencies to dispute the information in the hopes of getting it corrected.

During the past two years, the FTC has gone after a social media intelligence company and an online people search engine on the grounds that they constituted consumer reporting agencies subject to FCRA. In June 2012, the FTC settled charges against Spokeo, an online service that compiles and sells digital dossiers on consumers to human resource professionals, job recruiters, and other businesses. Spokeo assembles consumer data from on- and offline sources, including social media sites, to create searchable consumer profiles.
The profiles include an individual’s full name, physical address, phone number, age range, email address, hobbies, photos, ethnicity, religion, and social network activity. The FTC alleged that Spokeo failed to adhere to FCRA, including its obligation to ensure the accuracy of consumer reports. Ultimately, it obtained an $800,000 settlement with the company. That’s helpful, to be sure, but given the FTC’s limited resources, it may not lead to more accurate dossiers. (It also may mean that employers will keep online intelligence gathering in-house, placing their use of unreliable online information outside the reach of FCRA, as my co-blogger Frank Pasquale wrote so ably about in The Offensive Internet: Speech, Privacy, and Reputation.)

More recently, the FTC issued orders requiring nine data brokerage companies to provide the agency with information about how they collect and use data about consumers. The agency will use the information to study privacy practices in the data broker industry. The nine data brokers receiving orders from the FTC were (1) Acxiom, (2) Corelogic, (3) Datalogix, (4) eBureau, (5) ID Analytics, (6) Intelius, (7) Peekyou, (8) Rapleaf, and (9) Recorded Future. In its press release, the FTC explained that it is seeking details about: “the nature and sources of the consumer information the data brokers collect; how they use, maintain, and disseminate the information; and the extent to which the data brokers allow consumers to access and correct their information or to opt out of having their personal information sold.” The FTC called on the data broker industry to improve the transparency of its practices as part of a Commission report, Protecting Consumer Privacy in an Era of Rapid Change: Recommendations for Businesses and Policymakers. FTC Commissioner Julie Brill has been a tireless advocate for greater oversight of data brokers; here’s hoping that her efforts and those of her agency produce important reforms.
posted by Mary Anne Franks
It would be one thing if the only people defending the practice of non-consensual pornography were the easily identifiable misogynists, the ones who always come crawling out of the gutters to inject their poorly spelled and exclamation-point-filled victim-blaming screeds into any discussion of rape, sexual harassment, or gender inequality. But the victim-blaming rhetoric that has surfaced in the conversation about revenge porn is also coming from seemingly reasonable people – people who think deeply about other social and legal issues and who even seem to have some sympathy for the victims.
Let me take as one example a recent post in Forbes by someone I respect, Professor Eric Goldman. The post is titled “What Should We Do About Revenge Porn Sites Like Texxxan?” and the answer, apparently, is nothing. Prof. Goldman characterizes revenge porn as “distasteful,” likens it to the “bad etiquette” of checking out the price of a colleague’s home on Zillow, and concludes with this recommendation: “for individuals who would prefer not to be a revenge porn victim or otherwise have intimate depictions of themselves publicly disclosed, the advice is simple: don’t take nude photos or videos.”
The first thing that strikes me about Prof. Goldman’s discussion of revenge porn (and this is true of many discussions of the issue) is the failure to note its gendered dimensions. This is despite the fact that empirical evidence so far indicates that revenge porn is primarily produced and consumed by men and primarily targets women. Revenge porn belongs to that class of activities that includes rape, domestic violence, and sexual harassment – that is, the class of activities overwhelmingly (though of course not solely) perpetrated by men and directed overwhelmingly (again, not solely) at women. Like those activities, one major effect of revenge porn is to limit women’s freedom to live their lives: it punishes women and girls for engaging in activities that their male counterparts regularly undertake with minimal negative (and often positive) consequences.
posted by Danielle Citron
My recent post offered a potential amendment to Section 230 of the CDA that would exempt from the safe harbor operators whose sites are primarily designed to host illegal activity. Even without such legal change, cyber cesspool operators could face criminal liability if prosecutors took matters seriously. Section 230 does not provide a safe harbor to federal criminal charges. Consider revenge porn operator Hunter Moore’s statement to the press (Forbes’s Kashmir Hill and Betabeat’s Jessica Roy) that, on his new site, he will overlay maps of individuals’ homes next to their naked pictures and social media accounts (if he does not like them). If Moore is serious, he might open himself up to criminal charges of aiding and abetting cyber stalking. Congress, in its 2006 reauthorization of the Violence Against Women Act (VAWA), banned the use of any “interactive computer service” to engage in a “course of conduct” that places a person in another state in reasonable fear of serious bodily injury or death or that is intended to cause, and causes, a victim to suffer substantial emotional distress. 18 U.S.C.A. 2261A(2) (2012). As the Executive Director of the National Center for Victims of Crime explained in congressional testimony:
[S]talkers are using very sophisticated technology . . . —installing spyware on your computer so they can track all of your interactions on the Internet, your purchases, your e-mails and so forth, and using that against you, forwarding e-mails to people at your job, broadcasting your whereabouts, your purchases, your reading habits and so on, or installing GPS in your car so that you will show up at the grocery store, at your local church, wherever and there is the stalker and you can’t imagine how the stalker knew that you were going to be there. . . . this legislation amends the statute so that prosecutors have more effective tools, I think, to address technology through VAWA.
Congress ought to consider passing laws that criminalize the operation of sites designed to facilitate the posting of nude photographs without subjects’ consent, along the lines of state invasion of privacy laws. States like New Jersey prohibit the posting of someone’s nude or partially nude images without his or her consent if the images were recorded in a place where a reasonable person would enjoy an expectation of privacy. The Senate Judiciary Committee recently approved a bill that makes it a crime to make an online app whose primary use is to facilitate cyber stalking. The next important step is to criminalize sites doing the same.
Of course, laws will have limited coercive and expressive impact if they are never enforced. As the group End Revenge Porn rightly notes, “State police argue that the crime is occurring on the internet, which therefore crosses state lines and is out of their jurisdiction. The FBI claim that these cases are civil and/or do not threaten national security and therefore should be handled solely by lawyers.” Changing those attitudes, and developing legal solutions, are key. Advocacy groups like Without My Consent, lawyers, law professors like Mary Anne Franks, see here, Ann Bartow, see here, and Derek Bambauer, see here, activists like Jill Filipovic and Charlotte Laws, and most recently victims behind Women Against Revenge Porn and End Revenge Porn are working hard on this score. One might say that their work is part of an emerging cyber civil rights movement. (Check out Professor Franks’s important commentary about revenge porn on HuffPo Live.) Lucky for us at CoOp, Professor Franks will be joining us next month as a guest blogger. I will be working hard to finish my book Hate 3.0: The Rise of Discriminatory Online Harassment and How to Stop It (forthcoming Harvard University Press) and working with Professor Franks on non-consensual pornography, so more to come.
posted by Danielle Citron
Identity theft is now so common that we can joke about it.
Or, as Alan Alda’s character in Woody Allen’s Crimes and Misdemeanors says, “comedy is tragedy plus time.” Time to transform tragedy into comedy, indeed. Scanning the Privacy Rights Clearinghouse database demonstrates that reported data breaches are a daily occurrence. Since January 1, 2013, private and public entities have reported over 20 major data breaches. Included on the list were hospitals, universities, and businesses. Sometimes, the most vulnerable are targeted. For instance, on January 8, 2013, a dishonest employee of the Texas Department of Health and Human Services was arrested on suspicion of misusing client information to apply for credit cards and to receive medical care under clients’ names. It is bad enough that automated systems erroneously take recipients of public benefits off the rolls, as my work on Technological Due Process explores. Now the very people charged with helping them are destroying their medical and credit histories as well.
We have had over 600 million records breached since 2005, across approximately 3,500 reported data breaches. Of course, those figures represent only the breaches officially reported, likely a product of state data breach laws, whose requirements vary and which leave much reporting discretion to the breached entities themselves – entities with little incentive to err on the side of disclosure unless legally required to do so. So the bad news is that identity theft is prevalent, but at least we can laugh about it.