
The U.S. Supreme Court’s 4th Amendment and Cell Phone Case and Its Implications for the Third Party Doctrine

Today, the U.S. Supreme Court handed down a decision on two cases involving the police searching cell phones incident to arrest. The Court held 9-0 in an opinion written by Chief Justice Roberts that the Fourth Amendment requires a warrant to search a cell phone even after a person is placed under arrest.

The two cases are Riley v. California and United States v. Wurie, and they were decided in a single opinion under the title Riley v. California. The Court must have chosen to name the case after Riley to make things hard for criminal procedure experts, as there is a famous Fourth Amendment case called Florida v. Riley, 488 U.S. 445 (1989), which will now create confusion whenever someone refers to the “Riley case.”

Fourth Amendment Warrants

As a general rule, the government must obtain a warrant before engaging in a search. A warrant is an authorization by an independent judge or magistrate that is given to law enforcement officials after they properly justify their reason for conducting the search. There must be probable cause to search — a reasonable belief that the search will turn up evidence of a crime. The warrant requirement is one of the key protections of privacy because it ensures that the police can’t just search on a whim or a hunch. They must have a justified basis to search, and that must be proven before an independent decisionmaker (the judge or magistrate).

The Search Incident to Arrest Exception

But there are dozens of exceptions where government officials don’t need a warrant to conduct a search. One of these exceptions is a search incident to arrest. This exception allows police officers to search property on or near a person who has been arrested. In Chimel v. California, 395 U.S. 752 (1969), the Supreme Court held that the police could search the area within an arrestee’s immediate control. The rationale was that waiting to get a warrant might put police officers in danger if arrestees had dangerous items hidden on them, or might give arrestees time to destroy evidence. In United States v. Robinson, 414 U.S. 218 (1973), the Court held that there doesn’t need to be identifiable danger in any specific case in order to justify a search incident to arrest. Police can engage in such searches as a categorical rule.

What About Searching Cell Phones Incident to Arrest?

In today’s Riley case, the Court examined whether the police are allowed to search data on a cell phone incident to arrest without first obtaining a warrant. The Court held that cell phone searches should be treated differently from typical searches incident to arrest because cell phones contain so much data and present a greater invasion of privacy than more limited searches for physical objects: “Cell phones, however, place vast quantities of personal information literally in the hands of individuals. A search of the information on a cell phone bears little resemblance to the type of brief physical search considered in Robinson.”


The data retention judgment, the Irish Facebook case, and the future of EU data transfer regulation

On April 8 the Court of Justice of the European Union (CJEU) announced its judgment in Joined Cases C-293/12 and C-594/12 Digital Rights Ireland. Based on EU fundamental rights law, the Court invalidated the EU Data Retention Directive, which obliged telecommunications service providers and Internet service providers in the EU to retain telecommunications metadata and make it available to European law enforcement authorities under certain circumstances. The case illustrates both the key role that the EU Charter of Fundamental Rights plays in EU data protection law, and the CJEU’s seeming disinterest in the impact of its recent data protection rulings on other fundamental rights. In addition, the recent referral to the CJEU by an Irish court of a case involving data transfers by Facebook under the EU-US Safe Harbor holds the potential to further tighten EU rules for data transfers, and to reduce the possibility of EU-wide harmonization in this area.

In considering the implications of Digital Rights Ireland for the regulation of international data transfers, I would like to focus on a passage occurring towards the end of the judgment, where the Court criticizes the Data Retention Directive as follows (paragraph 68):

“[I]t should be added that that directive does not require the data in question to be retained within the European Union, with the result that it cannot be held that the control, explicitly required by Article 8(3) of the Charter, by an independent authority of compliance with the requirements of protection and security, as referred to in the two previous paragraphs, is fully ensured. Such a control, carried out on the basis of EU law, is an essential component of the protection of individuals with regard to the processing of personal data…”

This statement caught many observers by surprise. The CJEU is famous for the concise and self-referential style of its opinions, and the case revolved around the legality of the Directive in general, not around whether data stored under it could be transferred outside the EU. This issue was also not raised in the submission of the case to the Court, and first surfaced in the advisory opinion issued by one of the Court’s advocates-general prior to the judgment (see paragraph 78 of that Opinion).

In US constitutional law, the question “does the constitution follow the flag?” generally arises in the context of whether the Fourth Amendment to the US Constitution applies to government activity overseas (e.g., when US law enforcement abducts a fugitive abroad and brings him back to the US). In the context discussed here, the question is rather whether EU data protection law applies to personal data as they are transferred outside the EU, i.e., “whether the EU flag follows EU data”. As I explained in my book on the regulation of transborder data flows that was published last year by Oxford University Press, in many cases EU data protection law remains applicable to personal data transferred to other regions. For example, in introducing its proposed reform of EU data protection law, the European Commission stated in 2012 that one of its key purposes is to “ensure a level of protection for data transferred out of the EU similar to that within the EU”.

EU data protection law is based on constitutional provisions protecting fundamental rights (e.g., Article 8 of the EU Charter of Fundamental Rights), and the CJEU has emphasized in cases involving the independence of the data protection authorities (DPAs) in Austria, Germany, and Hungary that control of data processing by an independent DPA is an essential element of the fundamental right to data protection (without ever discussing independent supervision in the context of data processing outside the EU). In light of those previous cases, the logical consequence of the Court’s statement in Digital Rights Ireland would seem to be that fundamental rights law requires oversight of data processing by the DPAs also with regard to the data of EU individuals that are transferred to other regions.

This conclusion raises a number of questions. For example, how can it be reconciled with the fact that the enforcement jurisdiction of the DPAs ends at the borders of their respective EU Member States (see Article 28 of the EU Data Protection Directive 95/46)? If supervision by the EU DPAs extends already by operation of law to the storage of EU data in other regions, then why do certain EU legal mechanisms in addition force the parties to data transfers to explicitly accept the extraterritorial regulatory authority of the DPAs (e.g., Clause 5(e) of the EU standard contractual clauses of 2010)? And how does the Court’s statement fit with its 2003 Lindqvist judgment, where it held that EU data protection law should not be interpreted to apply to the entire Internet (see paragraph 69 of that judgment)? The offhand way in which the Court referred to DPA supervision over data processing outside the EU in the Digital Rights Ireland judgment gives the impression that it was unaware of, or disinterested in, such questions.

On June 18 the Irish High Court referred a case to the CJEU that may further develop its line of thought in the Digital Rights Ireland judgment. The High Court’s judgment in Schrems v. Data Protection Commissioner involved a challenge by Austrian student Max Schrems to the transfer of personal data to the US by Facebook under the Safe Harbor. The High Court announced that it would refer to the CJEU the questions of whether the European Commission’s adequacy decision of 2000 creating the Safe Harbor should be re-evaluated in light of the Charter of Fundamental Rights and widespread access to data by US law enforcement, and of whether the individual DPAs should be allowed to determine whether the Safe Harbor provides adequate protection (see paragraphs 71 and 84). The linkage between the two cases is evidenced by the Irish High Court’s frequent citation of Digital Rights Ireland, and by the CJEU’s conclusion that interference with the right to data protection caused by widespread data retention for law enforcement purposes without notice being given to individuals was “particularly serious” (see paragraph 37 of Digital Rights Ireland and paragraph 44 of Schrems v. Data Protection Commissioner). The High Court also criticized the Safe Harbor and the system of oversight of law enforcement data access in the US as failing to provide oversight “carried out on European soil” (paragraph 62), which seems inspired by paragraph 68 of the Digital Rights Ireland judgment.

The Irish referral to the CJEU also holds implications for the possibility of harmonized EU rules regarding international data transfers. If each DPA is allowed to override Commission adequacy decisions based on its individual view of what the Charter of Fundamental Rights requires, then there would be no point to such decisions in the first place (and the current disagreement over the “one stop shop” in the context of the proposed EU General Data Protection Regulation shows the difficulty of reaching agreement on pan-European rules where fundamental rights are at stake). One also wonders whether data transfer mechanisms beyond the Safe Harbor could be at risk (e.g., standard contractual clauses, binding corporate rules, etc.), given that they also allow data to be turned over to non-EU law enforcement authorities. The proposed EU General Data Protection Regulation could eliminate some of these risks, but its passage is still uncertain, and the interpretation by the Court of the role of the Charter of Fundamental Rights would still be relevant under it. Whatever the CJEU eventually decides, it seems inevitable that the case will result in a tightening of EU rules on international data transfers.

The referral by the Irish High Court also raises the question (which the High Court did not address) of how other important fundamental rights, such as freedom of expression and the right to communicate internationally (meaning, in essence, the freedom to communicate on the Internet), should be balanced with the right to data protection. In its recent jurisprudence, the CJEU seems to regard data protection as a “super right” that takes precedence over others; thus, in its recent judgment in the case C-131/12 Google Spain v. AEPD and Mario Costeja Gonzalez involving the “right to be forgotten”, the Court never even refers to Article 11 of the Charter of Fundamental Rights, which protects freedom of expression and the right to “receive and impart information and ideas without interference by public authority and regardless of frontiers”. In its zeal to protect personal data transferred outside the EU, the CJEU should not forget that, as it has stated in the past, data protection is not an absolute right and must be considered in relation to its function in society (see, for example, Joined Cases C-92/09 and C-93/09 Volker und Markus Schecke, paragraph 48). Nor should it forget that there must be some territorial limit to EU data protection law, if it is not to become a system of universal application that applies to the entire world (as the Court held in Lindqvist). Thus, there is an urgent need for an authoritative and dispassionate analysis of the territorial limits of EU data protection law, and of how a balance can be struck between data protection and other fundamental rights, guidance which unfortunately the CJEU seems unwilling to provide.

2

Computable Contracts Explained – Part 1

I had the occasion to teach “Computable Contracts” to the Stanford Class on Legal Informatics recently.  Although I have written about computable contracts here, I thought I’d explain the concept in a more accessible form.

I. Overview: What is a Computable Contract?

What is a Computable Contract?   In brief, a computable contract is a contract that a computer can “understand.” In some instances, computable contracting enables a computer to automatically assess whether the terms of a contract have been met.

How can computers understand contracts?  Here is the short answer (a more in-depth explanation appears below).  First, the concept of a computer “understanding” a contract is largely a metaphor.  The computer does not understand the contract at the deep conceptual or symbolic level that a literate person does; it “understands” it only in a more limited sense.  Contracting parties express their contract in the language of computers – data – which allows the computer to reliably identify the contract components and subjects.  The parties also provide the computer with a series of rules that allow the computer to react in a sensible way that is consistent with the underlying meaning of the contractual promises.

Aren’t contracts complex, abstract, and executed in environments of legal and factual uncertainty?  Some are, but some aren’t. The short answer here is that the contracts that are made computable don’t involve the abstract, difficult or relatively uncertain legal topics that tend to occupy lawyers.  Rather (for the moment at least), computers are typically given contract terms and conditions with relatively well-defined subjects and determinable criteria that tend not to involve significant legal or factual uncertainty in the average case.

For this reason, there are limits to computable contracts: only small subsets of contracting scenarios can be made computable.  However, it turns out that these contexts are economically significant. Not all contracts can be made computable, but importantly, some can.

Importance of Computable Contracts 

There are a few reasons to pay attention to computable contracts.  For one, they have been quietly appearing in many industries, from finance to e-commerce.  Over the past 10 years, for instance, many contracts to purchase financial instruments (e.g. equities or derivatives) have transformed from traditional contracts to electronic, “data-oriented” computable contracts.  Were you to examine a typical contract to purchase a standardized financial instrument these days, you would find that it looked more like a computer database record (i.e. computer data) and less like lawyerly writing in a Microsoft Word document.
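
To make this concrete, here is a minimal sketch, in Python, of what such a data-oriented contract record might look like. Every field name here is hypothetical, invented for illustration; real trading systems use standardized formats (FIX messages, FpML documents, and the like) rather than this ad hoc structure.

```python
# A hypothetical data-oriented record for an equity purchase contract.
# All field names are illustrative only, not drawn from any real standard.
equity_purchase = {
    "contract_type": "equity_purchase",
    "buyer": "PARTY-A",
    "seller": "PARTY-B",
    "instrument": "AAPL",            # the subject of the contract
    "quantity": 100,                 # shares to be delivered
    "price_per_share": 400.00,       # agreed price, in USD
    "settlement_date": "2015-01-10",
}
```

Because every term is a well-defined field rather than a sentence of prose, a program can identify the contract’s components directly, with no natural-language interpretation required.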

Computable contracts also have new properties that traditional, English-language, paper contracts do not have.  I will describe this in more depth in the next post, but in short, computable contracts can serve as inputs to other computer systems.  These other systems can take computable contracts and do useful analysis not readily done with traditional contracts. For instance, a risk management system at a financial firm can take computable contracts as direct inputs for analysis, because, unlike traditional English contracts, computable contracts are data objects themselves.
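
As a hedged illustration of that point, the toy function below consumes records like the hypothetical equity_purchase sketch above and computes a simple aggregate. It stands in for the kind of analysis a risk management system might run; the field names remain invented for illustration.

```python
def gross_notional(contracts):
    """Sum quantity * price across a list of contract records.

    A toy stand-in for risk analysis: because each contract is a data
    object, the computation reads fields directly instead of parsing
    a prose document.
    """
    return sum(c["quantity"] * c["price_per_share"] for c in contracts)

# A two-contract "portfolio" using the hypothetical record format above.
portfolio = [
    {"quantity": 100, "price_per_share": 400.00},
    {"quantity": 50, "price_per_share": 120.00},
]
print(gross_notional(portfolio))  # 46000.0
```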

II. Computable Contracts in More Detail

Having had a brief overview of computable contracts, the next few parts will discuss computable contracts in more detail.

A. What is a Computable Contract?

To understand computable contracts, it is helpful to start with a simple definition of a contract generally. 

A contract (roughly speaking) is a promise to do something in the future, usually according to some specified terms or conditions, with legal consequences if the promise is not performed.   For example, “I promise to sell you 100 shares of Apple stock for $400 per share on January 10, 2015.”

A computable contract is a contract that has been deliberately expressed by the contracting parties in such a way that a computer can:

1) understand what the contract is about;

2) determine whether or not the contract’s promises have been complied with (in some cases; a minimal sketch follows below).
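
As a minimal sketch of point 2), and assuming both the hypothetical field names used above and a trusted feed of settlement events (say, from a clearinghouse), compliance with the share-sale promise reduces to comparing well-defined fields:

```python
from datetime import date

# Hypothetical record for: "I promise to sell you 100 shares of Apple
# stock for $400 per share on January 10, 2015."
contract = {
    "instrument": "AAPL",
    "quantity": 100,
    "price_per_share": 400.00,
    "due_date": date(2015, 1, 10),
}

def promise_performed(contract, settlement):
    """Return True if a reported settlement event fulfills the contract.

    The settlement report is assumed to come from a trusted source; the
    computer "determines compliance" by field comparison, not by
    understanding the promise in any human sense.
    """
    return (settlement["instrument"] == contract["instrument"]
            and settlement["quantity"] == contract["quantity"]
            and settlement["price_per_share"] == contract["price_per_share"]
            and settlement["date"] <= contract["due_date"])

settlement = {"instrument": "AAPL", "quantity": 100,
              "price_per_share": 400.00, "date": date(2015, 1, 10)}
print(promise_performed(contract, settlement))  # True
```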

How can a computer “understand” a contract, and how can compliance with legal obligations be “computed” electronically?

To comprehend this, it is crucial to first appreciate the particular problems that computable contracts were developed to address.


Hitting Back When Hit By Google

Tuesday’s European Court of Justice decision requires Internet search engines to omit irrelevant or inadequate items from the results of searches for individuals by name. The ruling is simultaneously hailed and condemned, depending on whether one stresses individual control over reputation or anti-censorship (e.g., Henry Farrell in WaPo; Jonathan Zittrain in NYT; the ubiquitous Brian Leiter).  Two incentive effects of this recurring problem seem overlooked, as illustrated by a true story (with minor fact changes in the name of privacy).

A few years ago, a colleague got a blistering review of his teaching from a student blog.  There may have been some underlying basis for the criticism, but the post blew it all out of proportion and offered no context for the specific objection and no counterbalancing assessment of the teacher’s considerable strengths. It was both authoritative and damning as well as inadequate and of dubious relevance.

My friend’s distress intensified when this URL appeared first in all searches for his name using Ask, Bing, Google, Yahoo!, and other search tools.  It came up ahead of the professor’s SSRN page, school biography, library bibliography, and laudatory references in numerous other URLs on the web. The result magnified the post’s significance and caused my colleague anguish.

The blog publisher refused his request to take down the post, citing forum policies on open access, autonomy, and self-regulation.  At that time, at least, the search engines could not be bothered. Day after day, we’d search his name, and the inflammatory post kept coming up number one, threatening the professor’s reputation.

Finally overcoming his frustration, the professor chose to fight fire with fire.  He created a new blog and began posting entries at a regular clip.  Gradually, these posts and responses or references to them rose up the lists of hits for his name.  Eventually, the objectionable link sank down the list into a more proportionate presence, there as part of a more complete portrait, not the salient bruise it started out as.

The episode also emboldened my friend to redouble his investment in teaching.  Accepting the old adage that “where there’s smoke, there’s fire”, he vowed to minimize the chances that such postings, however acontextual or lopsided, would reappear.  His teaching evaluations, in fact, rose from just above average to well above average.

There are obviously many more significant, complex issues associated with the ranking and presence of misleading or irrelevant information on the Internet.  For example, norms in Europe may differ from those in the U.S., and a ruling like that of the ECJ seems unlikely in America.  And there are probably better forums to solve the problem than courthouses, including legislatures, markets, and think tanks.

But in struggling with the associated trade-offs and conflicting values, the incentive effects should be noted.  I don’t want negative URLs polluting my public persona.  But that produces two positive results: I try to avoid doing anything that would feed them, and I engage enough to neutralize their effects on my profile.  It worked for my old friend.


ROUNDUP: Media Law 05.07.2014


Non-traditional media is the focus today. First up is “net neutrality.” The Federal Communications Commission refers to net neutrality as the Open Internet, and back in 2010 it promulgated a rule designed to promote it. Under the principle of net neutrality, service providers cannot discriminate among users or information providers in terms of price or quality of service.  Many Internet service providers, however, are cable companies rather than traditional “common carriers” such as telephone companies, and so they don’t come under the same kind of FCC regulation. Therein lies the problem for the FCC.

Verizon challenged the FCC’s statutory authority to regulate it and other non-traditional Internet service providers under the principle of net neutrality, bringing a suit in federal court. On January 14, the U.S. Court of Appeals for the D.C. Circuit agreed with Verizon that the agency had exceeded its authority. Several of the FCC Commissioners are now considering whether another stab at regulation is a wise idea. Commissioner Ajit Pai has testified before the Subcommittee on Financial Services and General Government of the U.S. Senate Committee on Appropriations that he thinks net neutrality is an “unnecessary distraction,” and that other FCC priorities, including auctioning off more of the spectrum as required under the Spectrum Act, are more important. FCC Chair Tom Wheeler, by contrast, has issued a statement saying he intends to offer revamped rules that respond to the Verizon decision. He says in part, “We will carefully consider how Section 706 might be used to protect and promote an Open Internet consistent with the D.C. Circuit’s opinion and its earlier affirmance of our Data Roaming Order. Thus, we will consider (1) setting an enforceable legal standard that provides guidance and predictability to edge providers, consumers, and broadband providers alike; (2) evaluating on a case-by-case basis whether that standard is met; and (3) identifying key behaviors by broadband providers that the Commission would view with particular skepticism.” Many FCC watchers have reacted with, at best, skepticism, even though they have not yet seen the proposal, which the FCC will consider at its May 15 meeting. The FCC has opened up a digital “in box” to accept public comments here.

On April 3, the European Parliament threw its weight behind net neutrality, voting to adopt a proposal that would guarantee equal treatment for end users and end roaming charges by 2016.

On April 2, the Writers Guild and Hollywood’s movie producers (the Alliance of Motion Picture and Television Producers, or AMPTP) reached a three-year deal that spells out some important guarantees for writers on scripted shows, including a guaranteed salary increase, payments into pension funds, and agreements with regard to streaming. Members of the WGA still had to vote on the contract, but industry watchers expected the vote to be much less contentious than, for example, the one in 2008, which followed a strike of more than three months. That ugly negotiation was the first during which new media became an issue for both sides. David Robb discusses the long-term effects of the WGA strike here. In fact, Writers Guild members voted overwhelmingly to ratify the contract (98.5 percent to 1.5 percent) at the end of April. Up next: the SAG-AFTRA (Screen Actors Guild/American Federation of Television & Radio Artists) negotiations with the AMPTP. While SAG and AFTRA are now one union, the two still have separate contracts with the AMPTP.


The FTC and the New Common Law of Privacy

I’m pleased to announce that my article with Professor Woodrow Hartzog, The FTC and the New Common Law of Privacy, 114 Colum. L. Rev. 583 (2014), is now out in print.  You can download the final published version at SSRN.  Here’s the abstract:

One of the great ironies about information privacy law is that the primary regulation of privacy in the United States has barely been studied in a scholarly way. Since the late 1990s, the Federal Trade Commission (FTC) has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices. Despite over fifteen years of FTC enforcement, there is no meaningful body of judicial decisions to show for it. The cases have nearly all resulted in settlement agreements. Nevertheless, companies look to these agreements to guide their privacy practices. Thus, in practice, FTC privacy jurisprudence has become the broadest and most influential regulating force on information privacy in the United States — more so than nearly any privacy statute or any common law tort.

In this Article, we contend that the FTC’s privacy jurisprudence is functionally equivalent to a body of common law, and we examine it as such. We explore how and why the FTC, and not contract law, came to dominate the enforcement of privacy policies. A common view of the FTC’s privacy jurisprudence is that it is thin, merely focusing on enforcing privacy promises. In contrast, a deeper look at the principles that emerge from FTC privacy “common law” demonstrates that the FTC’s privacy jurisprudence is quite thick. The FTC has codified certain norms and best practices and has developed some baseline privacy protections. Standards have become so specific they resemble rules. We contend that the foundations exist to develop this “common law” into a robust privacy regulatory regime, one that focuses on consumer expectations of privacy, extends far beyond privacy policies, and involves a full suite of substantive rules that exist independently from a company’s privacy representations.


FTC v. Wyndham

The case has been quite long in the making. The opinion has been eagerly anticipated in privacy and data security circles. Fifteen years of regulatory actions have been hanging in the balance. We have waited and waited for the decision, and it has finally arrived.

The case is FTC v. Wyndham, and it is round one to the Federal Trade Commission (FTC).

Some Quick Background

For the past 15 years, the FTC has been one of the leading regulators of data security. It has brought actions against companies that fail to provide common security safeguards on personal data. The FTC has claimed that inadequate data security violates the FTC Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.” In many cases, the FTC has alleged that inadequate data security is deceptive because it contradicts promises made in privacy policies that companies will protect people’s data with “good,” “adequate,” or “reasonable” security measures. And in a number of cases, the FTC has charged that inadequate data security is unfair because it creates actual or likely harm to consumers that they cannot reasonably avoid and that isn’t outweighed by other benefits.

For more background about the FTC’s privacy and data security enforcement, please see my article with Professor Woodrow Hartzog: The FTC and the New Common Law of Privacy, 114 Colum. L. Rev. 583 (2014). The article has just come out in print, and the final published version can be downloaded for free here.

Thus far, when faced with an FTC data security complaint, companies have settled. But finally one company, Wyndham Worldwide Corporation, challenged the FTC. A duel has been raging in court. The battle has been one of gigantic proportions because so much is at stake: Wyndham has raised fundamental challenges to the FTC’s power to regulate data security under the FTC Act.

The Court’s Opinion and Some Thoughts

1. The FTC’s Unfairness Authority

Wyndham argued that because Congress enacted several data security laws to regulate specific industries (FCRA, GLBA, HIPAA, COPPA), Congress did not intend for the FTC to be able to regulate data security more generally under the FTC Act’s unfairness authority. The court rejected this argument, holding that “subsequent data-security legislation seems to complement—not preclude—the FTC’s authority.”

This holding seems quite reasonable, as the FTC Act was a very broad grant of authority for the FTC to regulate consumer protection across most industries.


Protecting the Precursors to Speech and Action

The Constitution cares deeply about the precursors to speech. Calo wondered where my paper, Constitutional Limits on Surveillance: Associational Freedom in the Age of Data Hoarding, parts ways with Solove; it does and it doesn’t. On the one hand, I agree with Dan’s work and build it out. I of course look to the First Amendment as part of understanding what associational freedom is. I also want that understanding to inform criminal procedure. On the other hand, I think that the Fourth Amendment on its own offers strong protection for associational freedom. I thus argue that we have missed that aspect of the Fourth Amendment. Furthermore, since Solove, and after him Kathy Strandburg, wrote about First Amendment connections to privacy, there has been some great work on the First Amendment and associational freedom by Ashutosh Bhagwat, Tabatha Abu El-Haj, and John Inazu. And Jason Mazzone started some of that work in 2002. I draw on that work to show what associational freedom is. Part of the problem is that when we look to how and why we protect associational freedom, we mistake what it is. That mistake means the Fourth Amendment becomes too narrow. We are stuck with protection only for speech acts and associations that speak.

As I put it in the paper:

Our current understanding of associational freedom is thin. We over-focus on speech and miss the importance of the precursors to speech—the ability to share, explore, accept, and reject ideas and then choose whether to speak. Recent work has shown, however, that the Constitution protects many activities that are not speech, for example petition and assembly, because the activities enable self-governance and foster the potential for speech. That work has looked to the First Amendment. I show that these concerns also appear in Fourth Amendment jurisprudence and work to protect us from surveillance regardless of whether the acts are speech or whether they are private.

In that sense I give further support to work by Julie Cohen, Neil Richards, Spiros Simitis, and Solove by explaining that all the details that many have identified as needing protection (e.g., our ability to play; protection from surveillance of what we read and watch) align with core ideals of associational freedom. This approach thus offers a foundation for calls to protect us from law enforcement’s ability to probe our reading, meeting, and gathering habits—our associational freedom—even though those acts are not private or speech, and it explains what the constitutional limits on surveillance in the age of data hoarding must be.


It’s About Data Hoards – My New Paper Explains Why Data Escrow Won’t Protect Privacy

A core issue in U.S. v. Jones has nothing to do with connecting “trivial” bits of data to see a mosaic; it is about the simple ability to have a perfect map of everywhere we go, with whom we meet, what we read, and more. It is about the ability to look backward and see all that information with little to no oversight, and, in a way, forever. That is why calls to shift the vast information grabs to a third party are useless. The move changes little given the way the government already demands information from private data hoards. Yes, not having immediate access to the information is a start. That might mitigate mischief. But clear procedures are needed before that separation can be meaningful. That is why telecom and tech giants should be wary of “The central pillar of Obama’s plan to overhaul the surveillance programs [which] calls for shifting storage of Americans’ phone data from the government to telecom companies or an independent third party.” It does not solve the problem of data hoards.

As I argue in my new article Constitutional Limits on Surveillance: Associational Freedom in the Age of Data Hoarding:

Put differently, the tremendous power of the state to compel action combined with what the state can do with technology and data creates a moral hazard. It is too easy to harvest, analyze, and hoard data and then step far beyond law enforcement goals into acts that threaten civil liberties. The amount of data available to law enforcement creates a type of honey pot—a trap that lures and tempts government to use data without limits. Once the government has obtained data, it is easy and inexpensive to store and search when compared to storing the same data in an analog format. The data is not deleted or destroyed; it is hoarded. That vat of temptation never goes away. The lack of rules on law enforcement’s use of the data explains why it has an incentive to gather data, keep it, and increase its stores. After government has its data hoard, the barriers to dragnet and general searches—ordinarily unconstitutional—are gone. If someone wishes to dive into the data and see whether embarrassing, or even blackmail-worthy, data is available, they can do so at their discretion; and in some cases law enforcement has said they should pursue such tactics. These temptations are precisely why we must rethink how we protect associational freedom in the age of data hoarding. By understanding what associational freedom is, what threatens it, and how we have protected it in the past, we will find that there is a way to protect it now and in the future.


Robotics and the New Cyberlaw

Cyberlaw is the study of the intersection between law and the Internet.  It should come as no surprise, then, that the defining questions of cyberlaw grew out of the Internet’s unique characteristics.  For instance: an insensitivity to distance led some courts to rethink the nature of jurisdiction.  A tendency, perhaps hardwired, among individuals and institutions to think of “cyberspace” as an actual place generated a box of puzzles around the nature of property, privacy, and speech.

We are now well into the cyberlaw project.  Certain questions have seen a kind of resolution.  Mark Lemley collected a few examples—jurisdiction, free speech, the dormant commerce clause—back in 2003.  Several debates continue, but most deep participants are at least familiar with the basic positions and arguments.  In privacy, for example, a conversation that began around an individual’s control over their own information has evolved into a conversation about the control information affords over individuals to whoever holds it.  In short, the twenty or so years legal and other academics have spent studying the Internet have paid the dividends of structure and clarity that one would hope for.

The problem is that technology has not stood still in the meantime.  The very same institutions that developed the Internet, from the military to household-name Internet companies like Google and Amazon, have initiated a significant shift toward a new transformative technology: robotics.  The word “significant” is actually pretty conservative: these institutions are investing, collectively, hundreds of billions of dollars in robotics and artificial intelligence.  People like the Editor-in-Chief of Wired Magazine—arguably the publication of record for the digital revolution—are quitting to found robotics companies.  Dozens of states now have robot-specific laws.

What do we as academics and jurists make of this shift?  It seems to me, at least, that robotics has a set of essential qualities distinct from the Internet’s and, therefore, will raise novel questions of law and policy.  If anything, I see robotics as departing even more abruptly from the Internet than the Internet did from personal computers and telephony.  In a new draft article, I explore in detail how I think cyberlaw (and law in general) will change with the ascendance of robotics as a commercial, social, and cultural force.  I am particularly interested in whether cyberlaw—with its peculiar brand of interdisciplinary pragmatism—remains the proper intellectual house for the study of this new transformative technology.

I follow robotics pretty closely, but I don’t purport to have all the answers.  Perhaps I have overstated the importance of robotics, misdiagnosed its likely impact, or otherwise selected an unwise path forward.  I hope you read the paper and let me know.