posted by Zvi Triger
The thought of hiring a private detective in this age of relatively accessible electronic surveillance seems a bit retro, like a black-and-white scene from a smoky film noir. But the practice has been enjoying a surprising comeback in recent years, as parents hire private investigators to spy on their children.
In an article titled Over-Parenting, my co-author Gaia Bernstein and I identified a trend of legal adoption of intensive parenting norms. We cautioned against society legally sanctioning a single parenting style – namely, intensive parenting – while deeming other, perfectly legitimate parenting styles potentially neglectful. We also pointed out that involved parenting is class-biased, since it is costly: not all parents can afford the technology that enables intensive parenting, such as GPS-enabled smartphones for their kids. We argued that when intensive parenting is applied to children who do not need it, it becomes over-parenting. Not all children need the same level of involvement in their lives; one of the most important roles of parents is to prepare their children for independent life, and over-parenting might thwart that role. Finally, we speculated that the cultural model for intensive parenting originates in media depictions of upper-middle-class families, and that how these families are portrayed in movies and TV shows influences real-life parents.
Well, I’m sad to report that over-parenting is not a uniquely American phenomenon. Last year, for example, a Chinese newspaper reported that parents in China are becoming increasingly involved in their children’s lives by hiring private investigators to check whether the children use drugs, drink alcohol or have sex. In Israel some parents are doing the same, especially during the long summer break, during which bored teenagers, many parents fear, are prone to engage in such activities (if you read Hebrew, you can read the story here). I am sure that some American parents do the same.
Leaving aside the class question (are parents who cannot afford a private eye neglectful?), what does this say about parents’ role as educators? Or about the level of trust (or distrust) between those parents and their children? It used to be that a spouse would hire a private investigator because they thought their partner was having an affair. Nowadays, a growing chunk of a private investigator’s work involves parents spying on their children. Can’t we say that the fact that parents feel they need to spy on their children already testifies to their limited parental skills?
August 29, 2013 at 5:05 pm Tags: comparative law, intensive parenting, law & technology, over-parenting, Privacy, private detectives Posted in: Culture, Family Law, Privacy, Privacy (Electronic Surveillance) Print This Post No Comments
posted by Omer Tene
In Europe, privacy is considered a fundamental human right. Article 8 of the European Convention on Human Rights (ECHR) limits the power of the state to interfere in citizens’ privacy, “except such as is in accordance with the law and is necessary in a democratic society”. Privacy is also granted constitutional protection under the Fourth Amendment to the United States Constitution. Both the ECHR and the US Constitution establish the right to privacy as freedom from government surveillance (I’ll call this “constitutional privacy”). Over the past 40 years, a specific framework has emerged to protect informational privacy (see here and here and here and here); yet this framework (“information privacy”) provides little protection against surveillance by either government or private sector organizations. Indeed, the information privacy framework presumes that a data controller (i.e., a government or business organization collecting, storing and using personal data) is a trusted party, essentially acting as a steward of individual rights. In doing so, it overlooks the fact that organizations often have strong incentives to subject individuals to persistent surveillance; to monetize individuals’ data; and to maximize information collection, storage and use.
October 8, 2012 at 2:36 am Tags: data protection, PETs, Privacy, surveillance, third party doctrine Posted in: Consumer Protection Law, Cyberlaw, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Technology, Uncategorized Print This Post 6 Comments
posted by Omer Tene
Last week I blogged here about a comprehensive survey on systematic government access to private sector data, which will be published in the next issue of International Data Privacy Law, an Oxford University Press law journal edited by Christopher Kuner. Several readers have asked whether the results of the survey are available online. Well, now they are – even before publication of the special issue. The project, which was organized by Fred Cate and Jim Dempsey and supported by The Privacy Projects, covered government access laws in Australia, Canada, China, Germany, Israel, Japan, United Kingdom and United States.
Peter Swire’s thought-provoking piece on the increased importance of government access to the cloud in an age of encrypted communications appears here. Also see the special issue’s editorial, by Fred, Jim and Ira Rubinstein.
October 2, 2012 at 2:04 am Tags: cloud computing, data protection, Fourth Amendment, government access, Privacy Posted in: Constitutional Law, Consumer Protection Law, Cyberlaw, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Law Enforcement), Privacy (National Security), Uncategorized Print This Post No Comments
posted by Omer Tene
The Sixth Circuit Court of Appeals has recently decided in United States v. Skinner that police do not need a warrant to obtain GPS location data for mobile phones. The decision, based on the holding of the Supreme Court in US v. Jones, highlights the need for a comprehensive reform of rules on government access to communications non-contents information (“communications data”). Once consisting of only a list of phone numbers dialed by a customer (a “pen register”), communications data have become rife with personal information, including location, clickstream, social contacts and more.
To a non-American, the US v. Jones ruling is truly astounding in its narrow scope. Clearly, the Justices aimed to sidestep the obvious question of expectation of privacy in public spaces. The Court did hold that the attachment of a GPS tracking device to a vehicle and its use to monitor the vehicle’s movements constitutes a Fourth Amendment “search”. But it based its holding not on the persistent surveillance of the suspect’s movements but rather on a “trespass to chattels” inflicted when a government agent ever-so-slightly touched the suspect’s vehicle to attach the tracking device. In the opinion of the Court, it was the clearly insignificant “occupation of property” (touching a car!) rather than the obviously weighty location tracking that triggered constitutional protection.
Suffice it to say that, to an outside observer, the property infringement appears to have been a side issue in both Jones and Skinner. The main issue of course is government power to remotely access information about an individual’s life, which is increasingly stored by third parties in the cloud. In most cases past – and certainly present and future – there is little need to trespass on an individual’s property in order to monitor her every move. Our lives are increasingly mediated by technology. Numerous third parties possess volumes of information about our finances, health, online endeavors, geographical movements, etc. For effective surveillance, the government typically just needs to ask.
This is why an upcoming issue of International Data Privacy Law (IDPL) (an Oxford University Press law journal), which is devoted to systematic government access to private sector data, is so timely and important. The special issue covers rules on government access in multiple jurisdictions, including the US, UK, Germany, Israel, Japan, China, India, Australia and Canada.
September 29, 2012 at 4:34 am Tags: cloud computing, data protection, law enforcement, national security, Privacy Posted in: Constitutional Law, Consumer Protection Law, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Law Enforcement), Privacy (National Security), Uncategorized Print This Post 2 Comments
posted by Omer Tene
Much has been written over the past couple of years about “big data” (see, for example, here and here and here). In a new article, Big Data for All: Privacy and User Control in the Age of Analytics, which will be published in the Northwestern Journal of Technology and Intellectual Property, Jules Polonetsky and I try to reconcile the inherent tension between big data business models and individual privacy rights. We argue that going forward, organizations should provide individuals with practical, easy-to-use access to their information, so they can become active participants in the data economy. In addition, organizations should be required to be transparent about the decisional criteria underlying their data processing activities.
The term “big data” refers to advances in data mining and the massive increase in computing power and data storage capacity, which have expanded by orders of magnitude the scope of information available for organizations. Data are now available for analysis in raw form, escaping the confines of structured databases and enhancing researchers’ abilities to identify correlations and conceive of new, unanticipated uses for existing information. In addition, the increasing number of people, devices, and sensors that are now connected by digital networks has revolutionized the ability to generate, communicate, share, and access data.
Data create enormous value for the world economy, driving innovation, productivity, efficiency and growth. In the article, we flesh out some compelling use cases for big data analysis. Consider, for example, a group of medical researchers who were able to parse out a harmful side effect of a combination of medications, used daily by millions of Americans, by analyzing massive amounts of online search queries. Or scientists who analyze mobile phone communications to better understand the needs of people who live in settlements or slums in developing countries.
September 20, 2012 at 4:28 am Tags: analytics, big data, data protection, Privacy Posted in: Consumer Protection Law, Cyberlaw, Privacy, Privacy (Consumer Privacy), Privacy (Medical), Technology, Uncategorized Print This Post 3 Comments
posted by Omer Tene
Michael Birnhack, a professor at Tel Aviv University Faculty of Law, is one of the leading thinkers about privacy and data protection today (for some of his previous work see here and here and here; he’s also written a deep, thoughtful, innovative book in Hebrew about the theory of privacy. See here). In a new article, Reverse Engineering Informational Privacy Law, which is about to be published in the Yale Journal of Law & Technology, Birnhack sets out to unearth the technological underpinnings of the EU Data Protection Directive (DPD). The DPD, enacted in 1995 and currently undergoing a process of thorough review, is surely the most influential legal instrument concerning data privacy all over the world. It has been heralded by proponents as “technology neutral” – a recipe for longevity in a world marked by rapid technological change. Alas, Birnhack unveils the highly technology-specific fundamentals of the DPD, thereby putting into doubt its continued relevance.
The first part of Birnhack’s article analyzes what technological neutrality of a legal framework means and why it is a sought-after trait. He posits that the idea behind it is simple: “the law should not name, specify or describe a particular technology, but rather speak in broader terms that can encompass more than one technology and hopefully, would cover future technologies that are not yet known at the time of legislation.” One big advantage is flexibility (the law can apply to a broad, continuously shifting set of technologies); consider the continued viability of the tech-neutral Fourth Amendment versus the obviously archaic nature of the tech-specific ECPA. Another advantage is the promotion of innovation; tech-specific legislation can lock in a specific technology, thereby stifling innovation.
Birnhack continues by creating a typology of tech-related legislation. He examines factors such as whether the law regulates technology as a means or as an end; whether it actively promotes, passively permits or directly restricts technology; at which level of abstraction it relates to technology; and who is put in charge of regulation. Throughout the discussion, Birnhack’s broad, rich expertise in everything law and technology is evident; his examples range from copyright and patent law to nuclear non-proliferation.
posted by Omer Tene
One of the most significant developments for privacy law over the past few years has been the rapid erosion of privacy in public. As recently as a decade ago, we benefitted from a fair degree of de facto privacy when walking the streets of a city or navigating a shopping mall. To be sure, we were in plain sight; someone could have seen and followed us; and we would certainly be noticed if we took off our clothes. After all, a public space was always less private than a home. Yet with the notable exception of celebrities, we would have generally benefitted from a fair degree of anonymity or obscurity. A great deal of effort, such as surveillance by a private investigator or team of FBI agents, was required to reverse that. [This, by the way, isn’t a post about US v. Jones, which I will write about later].
Now, with mobile tracking devices always on in our pockets; with GPS enabled cars; surveillance cameras linked to facial recognition technologies; smart signage (billboards that target passersby based on their gender, age, or eventually identity); and devices with embedded RFID chips – privacy in public is becoming a remnant of the past.
Location tracking is already a powerful tool in the hands of both law enforcement and private businesses, offering a wide array of localized services from restaurant recommendations to traffic reports. Ambient social location apps, such as Glancee and Banjo, are increasingly popular, creating social contexts based on users’ location and enabling users to meet and interact.
Facial recognition is becoming more prevalent. This technology too can be used by law enforcement for surveillance or by businesses to analyze certain characteristics of their customers, such as their age, gender or mood (facial detection), or to outright identify them (facial recognition). One such service, which was recently tested, allows individuals to check in to a location on Facebook through facial scanning.
Essentially, our face is becoming equivalent to a cookie, the ubiquitous online tracking device. Yet unlike cookies, faces are difficult to erase. And while cellular phones could in theory be left at home, we very rarely travel without them. How will individuals react to a world in which all traces of privacy in public are lost?
September 1, 2012 at 4:07 am Tags: anti-mask laws, data protection, facial recognition, Privacy, US v. Jones Posted in: Privacy, Privacy (Consumer Privacy), Privacy (Law Enforcement), Uncategorized Print This Post No Comments
posted by Omer Tene
Photo: Like its namesake, the European Data Protection Directive (“DPD”), this Mercedes is old, German-designed, clunky and noisy – yet effective. [Photo: Omer Tene]
Old habits die hard. Policymakers on both sides of the Atlantic are engaged in a Herculean effort to reform their respective privacy frameworks. While progress has been and will continue to be made for the next year or so, there is cause for concern that at the end of the day, in the words of the prophet, “there is no new thing under the sun” (Ecclesiastes 1:9).
The United States: Self Regulation
The United States legal framework has traditionally been a quiltwork of legislative patches covering specific sectors, such as health, financial, and children’s data. Significantly, information about individuals’ shopping habits and, more importantly, online and mobile browsing, location and social activities, has remained largely unregulated (see overview in my article with Jules Polonetsky, To Track or “Do Not Track”: Advancing Transparency and Individual Control in Online Behavioral Advertising). While increasingly crafty and proactive in its role as a privacy enforcer, the FTC has had to rely on the slimmest of legislative mandates, Section 5 of the FTC Act, which prohibits “unfair or deceptive acts or practices”.
To be sure, the FTC has had impressive achievements: reaching consent decrees with Google and Facebook, both of which include 20-year privacy audits; launching a serious discussion of a “do-not-track” mechanism; establishing a global network of enforcement agencies; and more. However, there is a limit to the mileage that the FTC can squeeze out of its opaque legislative mandate. Protecting consumers against “deceptive acts or practices” does not amount to protecting privacy: companies remain at liberty to explicitly state they will do anything and everything with individuals’ data (and thus do not “deceive” anyone when they act on their promise). And prohibiting “unfair acts or practices” is as vague a legal standard as can be; in fact, in some legal systems it might be considered anathema to fundamental principles of jurisprudence (nullum crimen sine lege). While some have heralded an emerging “common law of FTC consent decrees”, such “common law” leaves much to be desired, as it is based on non-transparent negotiations behind closed doors, resulting in short, terse orders.
This is why legislating the fundamental privacy principles, better known as the FIPPs (fair information practice principles), remains crucial. Without them, the FTC cannot do much more than enforce promises made in corporate privacy policies, which are largely acknowledged to be vacuous. Indeed, in its March 2012 “blueprint” for privacy protection, the White House called for legislation codifying the FIPPs (referred to by the White House as a “consumer privacy bill of rights”). Yet Washington insiders warn that the prospects of the FIPPs becoming law are slim, not only in an election year, but also after the elections, without major personnel changes in Congress.
July 30, 2012 at 7:47 pm Tags: co-regulation, data protection, multistakeholder, Privacy, right to be forgotten, self regulation, w3c Posted in: Cyber Civil Rights, Cyberlaw, International & Comparative Law, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Uncategorized Print This Post 3 Comments
posted by Omer Tene
Some consider the right to privacy a fundamental right for the rich, or even the rich and famous. It may be no coincidence that the landmark privacy cases in Europe feature names like Naomi Campbell, Michael Douglas, and Princess Caroline of Monaco. After all, if you lived eight-to-a-room in a shantytown in India, you would have little privacy and a lot of other problems to worry about. When viewed this way, privacy seems to be a matter of luxury; a right of spoiled teenagers living in six-bedroom houses (“Mom, don’t open the door without knocking”).
To refute this view, scholars typically point out that throughout history, totalitarian regimes targeted the right to privacy even before they targeted free speech. Without privacy, individuals are cowed by authority, conform to societal norms, and self-censor dissenting speech – or even thoughts. As Michel Foucault observed in his interpretation of Jeremy Bentham’s panopticon, the gaze has disciplinary power.
But I’d like to discuss an entirely different counter-argument to the privacy-for-the-rich approach. This view was recently presented at the Privacy Law Scholar Conference in a great paper by Laura Moy and Amanda Conley, both 2011 NYU law graduates. In their paper, Paying the Wealthy for Being Wealthy: The Hidden Costs of Behavioral Marketing (I love a good title!), which is not yet available online, Moy and Conley argue that retailers harvest personal information to make the poor subsidize luxury goods for the rich.
This might seem audacious at first, but think of it this way: through various loyalty schemes, retailers collect data about consumers’ shopping habits. Naturally, retailers are most interested in data about “high value shoppers.” This is intuitively clear, given that that’s where the big money, low price sensitivity and broad margins are. It’s also backed by empirical evidence, which Moy and Conley reference. Retailers prefer to tend to those who buy saffron and Kobe Beef rather than to those who purchase salt and turkey. To woo the high value shoppers, they offer attractive discounts and promotions – use your loyalty card to buy Beluga caviar; get a free bottle of Champagne. Yet obviously the retailers can’t take a loss for their marketing efforts. Who then pays the price of the rich shoppers’ luxury goods? You guessed it, the rest of us – with price hikes on products like bread and butter.
July 26, 2012 at 2:05 am Tags: big data, data protection, discrimination, price discrimination, Privacy Posted in: Advertising, Conferences, Consumer Protection Law, Cyberlaw, Privacy, Privacy (Consumer Privacy), Technology, Uncategorized Print This Post 6 Comments
posted by Susan Freiwald
A congressional inquiry, which recently revealed that cell phone carriers disclose a huge amount of subscriber information to the government, has increased the concern that Big Brother tracks our cell phones. The New York Times reported that, in 2011, carriers responded to 1.3 million law enforcement demands for cell phone subscriber information, including text messages and location information. Because each request can acquire information on multiple people, law enforcement agencies have clearly obtained such information about many more of us than could possibly be worthy of suspicion. Representative Markey, who spearheaded the inquiry, has followed up with a thorough letter to Attorney General Holder that asks how the Justice Department could possibly protect privacy and civil liberties while acquiring such a massive amount of information.
Among many important questions, Representative Markey’s letter asks whether the DOJ continues to legally differentiate between historical (those produced from carrier records) and real-time (those produced after an order is issued) cell site location information and what legal standard the DOJ meets for each (or both). Traditionally, courts have accorded less protection to historical location data, which I have criticized as a matter of Fourth Amendment law in my amicus briefs and in my scholarship. The government’s applications for historical data in the Fifth Circuit case, which is currently considering whether agents seeking historical location data must obtain a warrant, provide additional evidence that the distinction between real-time and historical location data makes no sense.
Some background. Under the current legal rules for location acquisition by law enforcement, which are complex, confusing, and contested, law enforcement agents have generally been permitted to acquire historical location data without establishing probable cause and obtaining a warrant. Instead, they have had to demonstrate that the records are relevant to a law enforcement investigation, which can dramatically widen the scope of an inquiry beyond those actually suspected of criminal activity and yield the large number of disclosures that the recent congressional inquiry revealed. Generally, prospective (real-time) location information has required a higher standard, often a warrant based on probable cause, which has made it more burdensome to acquire and therefore more protected against excessive disclosure.
Some commentators and judges have questioned whether historical location data should be available on an easier to satisfy standard, positing the hypothetical that law enforcement agents could wait just a short amount of time for real-time information to become a record, and then request it under the lower standard. Doing so would clearly be an end run around both the applicable statute (ECPA) and the Fourth Amendment, which arguably accord less protection to historical information because it is stored as an ordinary business record and not because of the fortuity that it is stored for a short period of time.
It turns out that this hypothetical is more than just the product of concerned people’s imagination. The three applications in the Fifth Circuit case requested that stored records be created on an ongoing basis. For example, just after a paragraph that requests “historical cell-site information… for the sixty (60) days prior” to the order, one application requests “For the Target Device, after receipt and storage, records of other information… provided to the United States on a continuous basis contemporaneous with” the start or end of a call, or during a call if that information is available. The other two applications clarify that “after receipt and storage” is “intended to ensure that the information” requested “is first captured and recorded by the provider before being sent.” In other words, the government is asking the carrier to create stored records and then send them on as soon as they are stored.
To be clear, only one of the three applications applied for only a relevance-based court order to obtain the continuously-created stored data. That court order, used for historical data, has never been deemed sufficient for forward-looking data (as the continuously-created data would surely be, since it would be generated after the order). The other two applications used a standard less than probable cause but more than just a relevance order. It is not clear whether the request for forward-looking data under the historical standard was inadvertent or an attempt to mislead. But applications in other cases have much more clearly asked for forward-looking prospective data, and didn’t require that data to be momentarily stored. Why would the applications in this case request temporary storage if not at least to encourage the judge considering the application to grant it on a lower standard?
I am optimistic that the DOJ’s response to Representative Markey’s letter will yield important information about current DOJ practices and will further spur reform. In the meantime, the government’s current practice of using this intrusive tool to gather too much information about too many people cries out for formal legal restraint. Congress should enact a law requiring a warrant based on probable cause for all location data. It should not codify a meaningless distinction between historical and real-time data that further confuses judges and encourages manipulative behavior by the government.
July 17, 2012 at 4:50 pm Tags: cell site location data, DOJ, ECPA, Fourth Amendment, location data, Markey, Privacy, surveillance Posted in: Constitutional Law, Criminal Procedure, Current Events, Cyberlaw, Privacy (Electronic Surveillance), Technology Print This Post 2 Comments
posted by Peter Swire
Yesterday I gave a presentation on “The Right to Data Portability: Privacy and Antitrust Analysis” at a conference at the George Mason Law School. In an earlier post here, I asked whether the proposed EU right to data portability violates antitrust law.
I think the presentation helped sharpen the antitrust concern. The presentation first develops the intuition that consumers should want a right to data portability (RDP), which is proposed in Article 18 of the EU Data Protection Regulation. RDP seems attractive, at least initially, because it might prevent consumers from getting locked into a software platform, and because it advances the existing EU right of access to one’s own data.
Turning to antitrust law, I asked how antitrust law would consider a rule that, say, prohibits an operating system from being integrated with software for a browser. We saw those facts, of course, in the Microsoft case decided by the DC Circuit over a decade ago. Plaintiffs asserted an illegal “tying” arrangement between Windows and IE. The court rejected a per se rule against tying of software, because integration of software can have many benefits and innovation in software relies on developers finding new ways to put things together. The court instead held that the rule of reason applies.
RDP, however, amounts to a per se rule against tying of software. Suppose a social network offers a networking service and integrates that with software that has various features for exporting or not exporting data in various formats. We have the tying product (social network) and the tied product (module for export or not of data). US antitrust law has rejected a per se rule here. The EU proposed regulation essentially adopts a per se rule against that sort of tying arrangement.
Modern US and EU antitrust law seek to enhance “consumer welfare.” If the Microsoft case is correct, then a per se rule of the sort in the Regulation quite plausibly reduces consumer welfare. There may be other reasons to adopt RDP, as discussed in the slides (and I hope in my future writing). RDP might advance human rights to access. It might enhance openness more generally on the Internet. But it quite possibly reduces consumer welfare, and that deserves careful attention.
May 17, 2012 at 3:56 pm Tags: Antitrust, Privacy, right to data portability Posted in: Administrative Law, Antitrust, Cyberlaw, Economic Analysis of Law, Privacy (Consumer Privacy), Web 2.0 Print This Post No Comments
posted by Stanford Law Review
The Stanford Law Review Online has just published an Essay by Jane Yakowitz Bambauer entitled How the War on Drugs Distorts Privacy Law. Professor Yakowitz analyzes the opportunity the Supreme Court has to rewrite certain privacy standards in Florida v. Jardines:
The U.S. Supreme Court will soon determine whether a trained narcotics dog’s sniff at the front door of a home constitutes a Fourth Amendment search. The case, Florida v. Jardines, has privacy scholars abuzz because it presents two possible shifts in Fourth Amendment jurisprudence. First, the Court might expand the physical spaces rationale from Justice Scalia’s majority opinion in United States v. Jones. A favorable outcome for Mr. Jardines could reinforce that the home is a formidable privacy fortress, protecting all information from government detection unless that information is visible to the human eye.
Alternatively, and more sensibly, the Court may choose to revisit its previous dog sniff cases, United States v. Place and Illinois v. Caballes. This precedent has shielded dog sniffs from constitutional scrutiny by finding that sniffs of luggage and a car, respectively, did not constitute searches. Their logic is straightforward: since a sniff “discloses only the presence or absence of narcotics, a contraband item,” a search incident to a dog’s alert cannot offend reasonable expectations of privacy. Of course, the logical flaw is equally obvious: police dogs often alert when drugs are not present, resulting in unnecessary suspicionless searches.
Jardines offers the Court an opportunity to carefully assess a mode of policing that subjects all constituents to the burdens of investigation and punishment, not just the “suspicious.” Today, drug-sniffing dogs are unique law enforcement tools that can be used without either individualized suspicion or a “special needs” checkpoint. Given their haphazard deployment and erratic performance, police dogs deserve the skepticism many scholars and courts have expressed. But the wrong reasoning in Jardines could fix indefinitely an assumption that police technologies and civil liberties are always at odds. This would be unfortunate. New technologies have the potential to be what dogs never were—accurate and fair. Explosive detecting systems may eventually meet the standards for this test, and DNA-matching and pattern-based data mining offer more than mere hypothetical promise. Responsible use of these emerging techniques requires more transparency and even application than police departments are accustomed to, but decrease in law enforcement discretion is its own achievement. With luck, the Court will find a search in Jardines while avoiding a rule that reflexively hampers the use of new technologies.
May 9, 2012 at 7:30 am Tags: Criminal Law, Criminal Procedure, drug policy, law enforcement, Privacy, technology Posted in: Civil Rights, Criminal Law, Criminal Procedure, Law Rev (Stanford), Privacy, Privacy (Law Enforcement), Technology
posted by Peter Swire
Greetings to Concurring Opinion readers. I thank the editors for inviting me to guest blog. I am looking forward to the opportunity to write more informally than I have done for a long time. I am out of the administration, and don’t have to go through the painful process of “clearing” every statement. And I am focusing on researching and writing rather than having clients. So the comments are just my own.
From the latter, I propose “multistakeholder” as the buzzword of the year so far. (“Context” is a close second, which I may discuss another time.) The Department of Commerce has received public comments on what should be done in the privacy multistakeholder process. (My own comment focused on the importance of defining “de-identified” information.)
Separately, the administration has been emphasizing the importance of multistakeholder processes for Internet governance, such as in a speech by Larry Strickling, Administrator of the National Telecommunications and Information Administration.
Here’s a try at making sense of this buzzword. On the privacy side, my view is that “multistakeholder” is mostly a substitute for the old term “self regulation.” Self regulation was the organizing theme when the U.S. negotiated the Safe Harbor privacy agreement with the EU in 2000. Barbara Wellbery (who lamentably is no longer with us) used “self regulation” repeatedly to explain the U.S. approach. The term accurately describes the legal regime under Section 5 of the FTC Act – an entity (all by itself) makes a promise, and then it’s legally enforceable by others. As I have written since the mid-1990s, this self regulatory approach can be better than other approaches, depending on the context.
The term “self regulation,” however, has taken on a bad odor. Many European regulators consider “self regulation” the theme of the Safe Harbor, which they regard as weaker than it should have been. Many privacy advocates have also justifiably said that the term puts too much emphasis on the “self” – the company that decides what promises to make.
Enter stage left with the new term, “multistakeholder.” The term directly addresses the advocates’ issue. Advocates should be in the room, along with regulators, entities from affected industries, and perhaps a lot of other stakeholders. It’s not “self regulation” by a “selfish” company. It is instead a process that includes the range of players whose interests should be considered.
I am comfortable with the new term “multistakeholder” for the old “self regulation.” The two differ in that the new term includes more of those affected. They are the same, however, in that both stand in contrast to top-down regulation by the government. Depending on the facts, multistakeholder may be better, or worse, than the government alternative.
Shifting to Internet governance, “multistakeholder” is a term that resonates with the bottom-up processes that led to the spectacular flowering of the Internet. Examples include organizations such as the Internet Engineering Task Force and the World Wide Web Consortium. Somehow, almost miraculously, the Web grew in twenty years from a tiny community to one numbering in the billions.
The term “multi-stakeholder” is featured in the important OECD Council Recommendation On Principles for Internet Policy Making, garnering 13 mentions in 10 pages. As I hope to discuss in a future blog post, this bottom-up process contrasts sharply with efforts, led by countries including Russia and China, to have the International Telecommunications Union play a major role in Internet governance. Emma Llansó at CDT has explained what is at stake. I am extremely skeptical about an expanded ITU role.
So: administration support for the “multistakeholder process” in both privacy and Internet governance. Similar in hoping that bottom-up beats top-down regulation. Different, I suspect, in how well the bottom-up has done historically. The IETF and the W3C have quite likely earned a grade in the A range for what they have achieved in Internet governance. I doubt that many people would give an A overall to industry self-regulation in the privacy area.
Reason to be cautious. The same word can work differently in different settings.
posted by Wake Forest Law Review
The Wake Forest Law Review Online has published an essay on internet privacy, online censorship and intellectual property rights: The Myth of Perfection by Derek E. Bambauer.
In The Myth of Perfection, Derek Bambauer explores the impact of the pursuit of perfection on internet privacy, online censorship and intellectual property protection. Bambauer argues that the “obsession” with perfection may threaten innovation and detract from more pressing privacy concerns. Ultimately, Bambauer concludes that in the place of perfection, “we should adopt the more realistic, and helpful, conclusion that often good enough is . . . good enough.”
Derek E. Bambauer, The Myth of Perfection, 2 Wake Forest L. Rev. Online 22 (2012), http://wakeforestlawreview.com/the-myth-of-perfection.
April 5, 2012 at 9:28 am Tags: censorship, Derek Bambauer, intellectual property rights, Myth of Perfection, Privacy, Wake Forest Law Review, Wake Forest Law Review Online Posted in: Law Review (Wake Forest)
posted by Ted Striphas
I first happened across Julie Cohen’s work around two years ago, when I started researching privacy concerns related to Amazon.com’s e-reading device, Kindle. Law professor Jessica Litman and free software doyen Richard Stallman had both talked about a “right to read,” but never was this concept placed on so sure a legal footing as it was in Cohen’s 1996 essay, “A Right to Read Anonymously.” Her piece helped me to understand the illiberal tendencies of Kindle and other leading commercial e-readers, which are (and I’m pleased more people are coming to understand this) data gatherers as much as they are appliances for delivering and consuming texts of various kinds.
Truth be told, while my engagement with Cohen’s “Right to Read Anonymously” essay proved productive for this particular project, it also provoked a broader philosophical crisis in my work. The move into rights discourse was a major departure — a ticket, if you will, into the world of liberal political and legal theory. Many there welcomed me with open arms, despite the awkwardness with which I shouldered an unfamiliar brand of baggage trademarked under the name, “Possessive Individualism.” One good soul did manage to ask about the implications of my venturing forth into a notion of selfhood vested in the concept of private property. I couldn’t muster much of an answer beyond suggesting, sheepishly, that it was something I needed to work through.
It’s difficult and even problematic to divine back-story based on a single text. Still, having read Cohen’s latest, Configuring the Networked Self, I suspect that she may have undergone a crisis not unlike my own. The distance between “A Right to Read Anonymously” and Configuring the Networked Self is enormous. I mean that less in terms of the sixteen-year time frame (during which Cohen was highly productive, let’s be clear) than in terms of the refinement of the thinking. Between 1996 and 2012 you see the emergence of a confident, postliberal thinker. This is someone who, confronted with the complexities of everyday life in highly technologized societies, now sees possessive individualism for what it is: a reductive management strategy, one whose conception of society seems more appropriate to describing life on a preschool playground than to forms of interaction mediated by the likes of Facebook, Google, Twitter, Apple, and Amazon.
In this, Configuring the Networked Self is an extraordinary work of synthesis, drawing together a diverse array of fields and literatures: legal studies in its many guises, especially its critical variants; science and technology studies; human-computer interaction; phenomenology; post-structuralist philosophy; anthropology; American studies; and surely more. More to the point, it’s an unusually generous example of scholarly work, given Cohen’s ability to see in and draw out of this material its very best contributions.
I’m tempted to characterize the book as a work of cultural studies given the central role the categories culture and everyday life play in the text, although I’m not sure Cohen would have chosen that identification herself. I say this not only because of the book’s serious challenges to liberalism, but also because of the sophisticated way in which Cohen situates the cultural realm.
This is more than just a way of saying she takes culture seriously. Many legal scholars have taken culture seriously, especially those interested in questions of privacy and intellectual property, which are two of Cohen’s foremost concerns. What sets Configuring the Networked Self apart from the vast majority of culturally inflected legal scholarship is her unwillingness to take for granted the definition — you might even say, “being” — of the category, culture. Consider this passage, for example, where she discusses Lawrence Lessig’s pathbreaking book Code and Other Laws of Cyberspace:
The four-part Code framework…cannot take us where we need to go. An account of regulation emerging from the Newtonian interaction of code, law, market, and norms [i.e., culture] is far too simple regarding both instrumentalities and effects. The architectures of control now coalescing around issues of copyright and security signal systemic realignments in the ordering of vast sectors of activity both inside and outside markets, in response to asserted needs that are both economic and societal. (chap. 7, p. 24)
What Cohen is asking us to do here is to see culture not as a domain distinct from the legal, or the technological, or the economic, which is to say, something to be acted upon (regulated) by one or more of these adjacent spheres. This liberal-instrumental (“Newtonian”) view may have been appropriate in an earlier historical moment, but not today. Instead, she is urging us to see how these categories are increasingly embedded in one another and how, then, the boundaries separating the one from the other have grown increasingly diffuse and therefore difficult to manage.
The implications of this view are compelling, especially where law and culture are concerned. The psychologist Abraham Maslow once said, “it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.” In the old, liberal view, one wielded the law in precisely this way — as a blunt instrument. Cohen, for her part, still appreciates how the law’s “resolute pragmatism” offers an antidote to despair (chap. 1, p. 20), but her analysis of the “ordinary routines and rhythms of everyday practice” in and around networked culture leads her to a subtler conclusion (chap. 1, p. 21). She writes: “practice does not need to wait for an official version of culture to lead the way….We need stories that remind people how meaning emerges from the uncontrolled and unexpected — stories that highlight the importance of cultural play and of spaces and contexts within which play occurs” (chap. 10, p. 1).
It’s not enough, then, to regulate with a delicate hand and then “punt to culture,” as one attorney memorably put it in an anthropological study of the free software movement. Instead, Cohen seems to be suggesting that we treat legal discourse itself as a form of storytelling, one akin to poetry, prose, or any number of other types of everyday cultural practice. Important though they may be, law and jurisprudence are but one means for narrating a society, or for arriving at its self-understandings and range of acceptable behaviors.
Indeed, we’re only as good as the stories we tell ourselves. This much Jaron Lanier, one of the participants in this week’s symposium, suggested in his recent book, You Are Not a Gadget. There he showed how the metaphorics of desktops and filing, generative though they may be, have nonetheless limited the imaginativeness of computer interface design. We deserve computers that are both functionally richer and experientially more robust, he insists, and to achieve that we need to start telling more sophisticated stories about the relationship of digital technologies and the human body. Lousy stories, in short, make for lousy technologies.
Cohen arrives at an analogous conclusion. Liberalism, generative though it may be, has nonetheless limited our ability to conceive of the relationships among law, culture, technology, and markets. They are all in one another and of one another. And until we can figure out how to narrate that complexity, we’ll be at a loss to know how to live ethically, or at the very least mindfully, in a densely interconnected, information-rich world. Lousy stories make for lousy laws and ultimately, then, for lousy understandings of culture.
The purposes of Configuring the Networked Self are many, no doubt. For those of us working in the twilight zone of law, culture, and technology, it is a touchstone for how to navigate postliberal life with greater grasp — intellectually, experientially, and argumentatively. It is, in other words, an important first chapter in a better story about ordinary life in a high-tech world.
posted by Stanford Law Review
The Stanford Law Review Online has just published a piece by M. Ryan Calo discussing the privacy implications of drone use within the United States. In The Drone as Privacy Catalyst, Calo argues that domestic use of drones for surveillance will go forward largely unimpeded by current privacy law, but that the “visceral jolt” caused by witnessing these drones hovering above our cities might serve as a catalyst and finally “drag privacy law into the twenty-first century.”
In short, drones like those in widespread military use today will tomorrow be used by police, scientists, newspapers, hobbyists, and others here at home. And privacy law will not have much to say about it. Privacy advocates will. As with previous emerging technologies, advocates will argue that drones threaten our dwindling individual and collective privacy. But unlike the debates of recent decades, I think these arguments will gain serious traction among courts, regulators, and the general public.
December 12, 2011 at 4:52 pm Tags: academia, Brandeis, Constitutional Law, drones, Kyllo, Privacy, surveillance, UAVs, Warren Posted in: Constitutional Law, Law Rev (Stanford), Law School (Law Reviews), Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (National Security), Technology
posted by Scott Peppet
The biometric technologies firm Hoyos (previously Global Rainmakers Inc.) recently announced plans to test massive deployment of iris scanners in Leon, Mexico, a city of over a million people. The company expects to install thousands of the devices, some capable of picking out fifty people per minute even at regular walking speeds. At first the project will focus on law enforcement and improving security checkpoints, but within three years the plan calls for integrating iris scanning into most commercial locations. Entry to stores or malls, access to an ATM, use of public transportation, paying with credit, and many other identity-related transactions will occur through iris scanning and recognition. (For more details, see Singularity’s post with videos.) Hoyos has the backing to make this happen: on October 12th it also announced new investment of over $40M to fund its growth.
There are obviously lots of interesting privacy- and tech-related issues here. I’ll focus on one: the company’s roll-out strategy is explicitly premised on the unraveling of privacy created by the negative inferences and stigma that will attach to those who choose not to participate. Criminals will automatically be scanned and entered into the database upon conviction. Jeff Carter, Chief Development Officer at Hoyos, expects law-abiding citizens to participate as well, however. Some will do so for convenience, he says, and then he expects everyone to follow: “When you get masses of people opting-in, opting out does not help. Opting out actually puts more of a flag on you than just being part of the system. We believe everyone will opt-in.” (For the full interview, see Fast Company’s post on the project.)
In a forthcoming article, I’ve written at length about the unraveling effect and why it now poses a serious threat to privacy. This biometric deployment is one of many examples, but it most explicitly illustrates that unraveling has moved beyond unexpected consequence to become corporate strategy.
November 6, 2010 at 4:05 pm Tags: Privacy Posted in: Anonymity, Economic Analysis of Law, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Law Enforcement), Uncategorized
On the Colloquy: The Credit Crisis, Refusal-to-Deal, Procreation & the Constitution, and Open Records vs. Death-Related Privacy Rights
posted by Northwestern University Law Review
This summer started off with a three-part series from Professor Olufunmilayo B. Arewa looking at the credit crisis and possible changes that would focus on averting future market failures, rather than continuing to create regulations that only address past ones. Part I of Prof. Arewa’s series looks at the failure of risk management within the financial industry. Part II analyzes the regulatory failures that contributed to the credit crisis as well as potential reforms. Part III concludes by addressing recent legislation and whether it will actually help solve these very real problems.
Next, Professors Alan Devlin and Michael Jacobs take on an issue at the “heart of a highly divisive, international debate over the proper application of antitrust laws” – what should be done when a dominant firm refuses to share its intellectual property, even at monopoly prices.
Professor Carter Dillard then discussed the circumstances in which it may be morally permissible, and possibly even legally permissible, for a state to intervene and prohibit procreation.
Rounding out the summer was Professor Clay Calvert’s article looking at journalists’ use of open records laws and death-related privacy rights. Calvert questions whether journalists have a responsibility beyond simply reporting dying words and graphic images. He concludes that, at the very least, journalists should be mindful of the impact their reporting has on surviving family members.
September 5, 2010 at 1:15 pm Tags: Antitrust, Constitutional Law, copyright, discrimination, financial crisis, free speech, Intellectual Property, Privacy, trademark Posted in: Antitrust, Bioethics, Civil Rights, Constitutional Law, Corporate Finance, First Amendment, Intellectual Property, Privacy, Securities, Securities Regulation
posted by Gaia Bernstein
A lot has been written about Facebook and its users’ loss of privacy. In fact, for some, Facebook and loss of privacy have become synonymous. A major fear involves the use of Facebook users’ personal information by information aggregators, who will use the data to target the sale of products. I do not intend to contest here that Facebook users disclose a lot of personal information. But I want to look at how accurate the information that users reveal on Facebook actually is.
When people surf the Internet, their personal information, the websites they visit, and their searches are collected by cookies. As I have written, people tend to disregard these privacy threats at least partly because of their lack of visibility. Even those who know that their information can be collected by cookies tend to forget it as they use the Internet on a daily basis. As a result, the information collected by cookies reveals relatively true preferences. Cookies will reveal embarrassing or secret facts, such as visits to pornography sites or to medical sites to investigate a worrying medical condition.
But Facebook is different. Facebook users are constantly aware that they are being viewed. True, they may not be thinking about the companies that may eventually aggregate the information. But they are surely thinking of the hundreds of friends who will be reading their status updates and examining their favorite books, favorite movies, and linked websites. Facebook users “package” themselves. They present themselves to the world the way they want to be perceived. Their real preferences and tastes may be somewhat or even completely different from those they present on Facebook. A criminal law professor may stock her Facebook library with legal theory books, while in fact in her spare time she is an avid purchaser and reader of chick-lit books. A twenty-year-old college student may want to appear cool by linking to trendy music, although his real passion remains collecting Star Wars figures.
Some information on Facebook, such as date of birth or marital status, is less likely to be misrepresented by users and provides rich ground for data mining. But Facebook users’ “packaging” raises two issues. Companies seeking to target consumers with products they actually want to purchase may find Facebook information less useful than believed. And from a privacy perspective, it is not merely the disclosure of true personal information that should concern us, but also the creation of false or misleading individual profiles by data-mining companies — profiles that can eventually change the information and consumption options available to these Facebook users.
posted by Gaia Bernstein
Researchers can gain significant genetic information by studying indigenous and preferably isolated populations. Although both researchers and indigenous populations can gain from this collaboration, the two groups often do not see eye to eye. This was the case in the collaboration between the Havasupai Indians and researchers from Arizona State University, which resulted in a long legal fight. The Havasupai Indians were suffering from a high prevalence of diabetes and agreed to give blood samples for genetic research on diabetes. The members of the tribe were infuriated when they found out later that their blood samples had been used for other purposes, among them genetic research on schizophrenia.
The New York Times reported yesterday that this conflict resulted in a settlement in which Arizona State University agreed to pay $700,000 to the tribe members and also to return the blood samples. The Havasupai Indians’ main legal claim was violation of informed consent. Informed consent requires that patients and research subjects receive full information that will enable them to decide whether to adopt a certain medical treatment plan or participate in research. Here, the Havasupai Indians argued that the informed consent principle was violated because they were told that their blood samples would be used for one purpose while, in fact, they were used for another.
No doubt, the Havasupai Indians’ informed consent argument resulted in their victorious settlement. But the harder question is whether the informed consent principle can feasibly be applied in the area of genetics. Genetic information is not just individual information; it also provides information about groups and families. For example, assume there is a tribe in which some members agree to participate in genetic research investigating manic depression. Other members of the tribe refuse because they are concerned that a result showing a prevalent genetic mutation for manic depression among them could stigmatize them and even lead to discrimination against the tribe. The researchers collect samples only from the members of the group who agree to the research. But the results still provide genetic information about all members of the tribe, even those who refused to participate, because of their genetic connection to those who did.
The result in the Havasupai settlement cannot, then, be seen as a victory for the principle of informed consent in the area of genetics. Restricting genetic researchers to using samples only for the purpose for which they were collected only partly resolves the informed consent problem. The group nature of genetic information makes the application of informed consent to genetic research much more complicated than that.