
Category: Cyberlaw


Neutrality or Nirvana?

Trade law should not allow countries to insist on a regulatory nirvana in cyberspace unmatched in real space.

Reading Anupam Chander’s The Electronic Silk Road has been a real treat, and thanks to the folks at Concurring Opinions for organizing this terrific online symposium and including me. The book offers a wide-ranging and insightful discussion about global electronic commerce and its regulation and management. Anupam proposes general principles—rules of the road, essentially—to guide policymakers in this process of regulating and managing global e-commerce. The very first principle introduced in the book–the quotation above captures its essence–is that of technological neutrality: To keep cybertrade free and open, the online provision of a service should not be subject to more onerous regulatory burdens than its offline counterpart.

I wish to focus on this first principle. It seems a balanced and uncontroversial prescription. Why should local regulators saddle online service providers with heavier regulatory burdens than their local bricks-and-mortar competitors? The specter of protectionism lurks!

For me, Anupam’s technological neutrality principle is insufficiently ambitious about the possibilities for effective regulation of e-commerce. Anupam’s concerns are free trade concerns, with which I am sympathetic. At the same time, e-commerce may actually be able to do better than bricks-and-mortar commerce on a number of important regulatory fronts, and technological neutrality gives up on those possibilities. It relieves the pressure to pursue more efficient regulation in cyberspace.



Opportunities and Roadblocks Along the Electronic Silk Road

Last week, Foreign Affairs posted a note about my book, The Electronic Silk Road, on its Facebook page. In the comments, some clever wag asked, “Didn’t the FBI shut this down a few weeks ago?” In other venues as well, as I have shared portions of my book across the web, individuals around the world have written back, sometimes applauding and at other times challenging my claims. My writing itself has journeyed across the world: when I adapted part of a chapter as “How Censorship Hurts Chinese Internet Companies” for The Atlantic, the China Daily republished it. The Financial Times published its review of the book in both English and Chinese.

Even these posts involved international trade. Much of this activity took place on websites, from Facebook to The Atlantic to the Financial Times, each earning revenue in part from cross-border advertising (even the government-owned China Daily is apparently under pressure to increase advertising). In the second quarter of 2013, for example, Facebook earned the majority of its revenue outside the United States: $995 million out of a total of $1,813 million, or 55 percent.
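A quick back-of-the-envelope check of that revenue split, sketched in Python; the two figures are simply the ones quoted above, and the calculation is only illustrative:

```python
# Back-of-the-envelope check of Facebook's Q2 2013 revenue split,
# using only the figures quoted above (in millions of USD).
international = 995
total = 1813

share = international / total
print(f"International share of revenue: {share:.1%}")  # about 54.9%, i.e. roughly 55 percent
```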

But this trade also brought communication, with ideas and critiques circulating around the world. The old silk roads were similarly passages not only for goods but also for knowledge. They helped shape our world, not only materially but spiritually, just as the mix of commerce and communication on the Electronic Silk Road will reshape the world to come.



Upcoming Online Symposium on Professor Anupam Chander’s The Electronic Silk Road

Danielle and I are happy to announce that next week, Concurring Opinions will host an online symposium on Professor Anupam Chander’s The Electronic Silk Road: How the Web Binds the World Together in Commerce. Professor Chander teaches at U.C. Davis’s King Hall School of Law. Senators, academics, trade representatives, and pundits have lauded the book for its clarity and for the argument Professor Chander makes. He examines how the law can facilitate commerce by reducing trade barriers but argues that consumer interests need not be sacrificed:

On the ancient Silk Road, treasure-laden caravans made their arduous way through deserts and mountain passes, establishing trade between Asia and the civilizations of Europe and the Mediterranean. Today’s electronic Silk Roads ferry information across continents, enabling individuals and corporations anywhere to provide or receive services without obtaining a visa. But the legal infrastructure for such trade is yet rudimentary and uncertain. If an event in cyberspace occurs at once everywhere and nowhere, what law applies? How can consumers be protected when engaging with companies across the world?

But will the book hold up under our panel’s scrutiny? I think so, but only after some probing and dialogue.

Our Panelists include Professor Chander as well as:

Paul Berman

Miriam Cherry

Graeme Dinwoodie

Nicklas Lundblad

Frank Pasquale

Pierluigi Perri

Adam Thierer

Haochen Sun

Fred Tung

And of course

Danielle Citron and I will be there too.


Legal Developments in Revenge Porn: An Interview with Mary Anne Franks

A handful of state legislatures have recently passed or considered bills to address the harm of non-consensual pornography (often called ‘revenge porn’). The topic raises important questions about privacy, civil rights, and online speech and harassment.

Law professor Mary Anne Franks has written previously on the topic in multiple venues, including in guest posts at Concurring Opinions. We were pleased to catch up with her recently to discuss the latest developments. Our interview follows:

**

Hi, Mary Anne! Thanks so much for joining us for an interview. This is a really interesting topic, and we’re glad to get your take on it.

I am delighted to be here! Thank you for having me.

Okay, some substantive questions. First, what is ‘revenge porn’?


Rent books on Amazon? Hmm.

As I work away on 3D printing, I am looking at the regulation literature. Ayres and Braithwaite’s Responsive Regulation is available on Amazon for $34.99 for Kindle, or you can rent it starting at $14.73 (no kidding, it is that precise). There is a calendar on which you can select the length of the rental (three months comes out to $22.30, and to Amazon’s credit, hovering over a date shows the price rather than requiring a click on each date). On the one hand, this offering seems rather nifty. Yet I wonder what arguments about market availability and fair use will be made with this sort of rental model for books in play. And this option brings us one step closer to perfect price discrimination. Would I see the same rental price as someone else? Would I need a research assistant to rent for me? Would that person’s price model be forever altered based on some brief period of working for a professor? What about librarians who rent books for work? (I suppose work accounts would be differentiated, but the overlap between interests may shift what that person sees on a personal account too.) Perhaps Ayres and Braithwaite’s regulation pyramid is needed yet again.
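To make the mechanics concrete, here is a minimal sketch in Python of a duration-based rental schedule. Amazon does not publish its formula, so the linear interpolation and one-year cap below are purely assumptions for illustration; only the $14.73 and $34.99 endpoints come from the listing described above.

```python
# Hypothetical sketch of a duration-priced rental schedule. The linear rule and
# the one-year cap are assumptions for illustration; Amazon's actual pricing
# model is not public and need not follow this curve.

MIN_RENTAL = 14.73   # cheapest (shortest) rental observed on the listing
PURCHASE = 34.99     # Kindle purchase price
MAX_DAYS = 365       # assume the rental calendar tops out at one year

def rental_price(days: int) -> float:
    """Interpolate linearly between the minimum rental and the purchase price."""
    days = max(1, min(days, MAX_DAYS))
    return round(MIN_RENTAL + (days / MAX_DAYS) * (PURCHASE - MIN_RENTAL), 2)

for months in (1, 3, 6, 12):
    print(f"{months:>2} month(s): ${rental_price(months * 30):.2f}")
```

A personalized-pricing variant would simply multiply the result by a per-customer factor, which is exactly the price-discrimination worry raised above.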


Secret Adjudications: the No Fly List, the Right to International Air Travel, and Procedural Justice?

Latif v. Holder concerns the procedures owed individuals denied the right to travel internationally due to their inclusion in the Terrorist Screening database. Thirteen individuals sued the FBI, which maintains the No Fly list and the Terrorist Screening database. Four plaintiffs are veterans of the armed forces; others just have Muslim-sounding names. All of the plaintiffs are U.S. citizens or lawful residents. The plaintiffs’ stories are varied but follow a similar trajectory. One plaintiff, a U.S. Army veteran, was not allowed to return to the U.S. from Colombia after visiting his wife’s relatives. Because he could not fly to the U.S., he missed a medical exam required for his new job. The employer rescinded his offer. Another plaintiff, a U.S. Air Force veteran, was in Ireland visiting his wife. He spent four months trying desperately to return to Boston. Denied the right to travel internationally, the thirteen plaintiffs lost jobs, business opportunities, and disability benefits. Important family events were missed. The plaintiffs could not travel to perform religious duties like the hajj. Some plaintiffs were allegedly told that they could regain their right to fly if they served as informants or told “what they knew,” but that option was unhelpful because they had nothing to offer federal officials. Plaintiffs outside the U.S. were allowed to return to their homes on a one-time pass.

When back in the U.S., they turned to the TSA’s redress process (calling it “process” seems bizarre). The process involves filling out a form describing their inability to travel and sending it via DHS to the Terrorist Screening Center. The Terrorist Screening Center says that it reviews the information to determine whether the person’s name is an exact match to someone included in the terrorist database or No Fly list. All of the plaintiffs filed redress claims; all received DHS determination letters that neither confirmed nor denied their inclusion on the list. The letters told the plaintiffs essentially nothing: we reviewed your claim, and we cannot tell you our determination.

The plaintiffs sued the federal government on procedural due process and APA grounds. They argued that DHS, the FBI, and the TSA deprived them of their right to procedural due process by failing to give them post-deprivation notice or a meaningful chance to contest their inclusion in the terrorist database or No Fly list, inclusion they have to presume as a factual matter based on their inability to travel, though some of the plaintiffs were told informally that they appeared on the No Fly list. The standard Mathews v. Eldridge analysis determines the nature of the due process hearings owed individuals whose life, liberty, or property is threatened by agency action. Under Mathews, courts weigh the value of the person’s threatened interest; the risk of erroneous deprivation and the probable benefit of additional or substitute procedures; and the government’s asserted interests, including national security concerns and the cost of additional safeguards.

Most recently, the judge partially granted plaintiffs’ summary judgment motion, ordering further briefing, set for September 9. In the August ruling, plaintiffs were victorious in important respects. The judge found that plaintiffs had a constitutionally important interest at stake: the right to fly internationally. As the judge explained, plaintiffs had been totally banned from flying internationally, which effectively meant that they could not travel in or out of the U.S. They were not merely inconvenienced. None could take a train or car to their desired destinations. Some had great difficulty returning to the U.S. by other means, including boat, because the No Fly list is shared with 22 foreign countries and with U.S. Customs and Border Protection. Having the same name as someone flagged as a terrorist (or a name matching a misspelling or mistranslation) can mean not being able to travel internationally. Period. The court also held that the federal government interfered with another constitutionally important interest: what the Court has called “stigma plus,” harm to reputation plus interference with travel. She might also have pointed to property, given that the lost jobs and benefits amounted to the “plus” deprivation. That takes care of the first Mathews factor.

Now for the second. The court assessed the risk of erroneous deprivation under the current DHS redress process. On that point, the court noted that it is hard to imagine how the plaintiffs had any chance to ensure that DHS got it right, because they never received notice of whether they were on the list or, if they were, why. Plaintiffs had no chance to explain their side of the story or to correct misinformation held by the government; indeed, they had no way of knowing what misinformation or inaccuracy there might be. In the “trust us” theme all too familiar these days, defendants argued that the risk of error is minute because the database is updated daily, officials regularly review and audit the list, and nominations to the list must be reviewed by TSC personnel. To that, the court responded that the DOJ’s own Inspector General had criticized the No Fly list in 2007 and in 2012 as riddled with errors. Defendants also contended that the availability of judicial review was proof that the risk of erroneous deprivation was small. The court put off a determination on the risk of erroneous deprivation and the value of added procedures because it could not evaluate the defendants’ claim that plaintiffs could theoretically seek judicial review of determinations of which they had no notice. Defendants apparently conceded that there were no known appellate decisions providing meaningful judicial review. The court required the defendants to provide more briefing on the reality of that possibility, which I must say seems difficult if not impossible for plaintiffs to pursue. Because the court could not weigh the second factor, the judge could not balance the first two considerations against the government’s interest.

I will have more to say about the decision tomorrow. The process provided seems Kafkaesque. It is hard to imagine that the public will learn much from whatever the defendants file. The briefing will surely be submitted for in camera review. Details of the process may be deemed classified. If so, defendants may invoke the state secrets doctrine to prevent the court from ever meaningfully addressing the rest of the summary judgment motion. It would not be the first time that the federal government invoked the state secrets doctrine to cover up embarrassing details of mismanagement. Since its beginnings, the state secrets doctrine has done just that. The parties were supposed to provide the court a status update today. More soon.


Stanford Law Review Online: Privacy and Big Data


The Stanford Law Review Online has just published a Symposium of articles entitled Privacy and Big Data.

Although the solutions to many modern economic and societal challenges may be found in better understanding data, the dramatic increase in the amount and variety of data collection poses serious concerns about infringements on privacy. In our 2013 Symposium Issue, experts weigh in on these important questions at the intersection of big data and privacy.

Read the full articles in Privacy and Big Data at the Stanford Law Review Online.

 


Predictive Policing and Technological Due Process

Police departments have been increasingly crunching data to identify criminal hot spots and to allocate policing resources to address them. Predictive policing has been around for a while without raising too many alarms. Given the daily proof that we live in a surveillance state, such policing seems downright quaint. Putting more police on the beat to address likely crime is smart. In such cases, software is not making predictive adjudications about particular individuals. Might someday governmental systems assign us risk ratings, predicting whether we are likely to commit crime? We certainly live in a scoring society. The private sector is madly scoring us. Individuals are denied the ability to open up bank accounts; they are identified as strong potential hires (or not); they are deemed “waste” not worthy of special advertising deals; and so on. Private actors don’t owe us any process, at least as far as the Constitution is concerned. On the other hand, if governmental systems make decisions about our property (perhaps licenses denied due to a poor scoring risk), liberty (watch list designations leading to liberty intrusions), and life (who knows with drones in the picture), due process concerns would be implicated.

What about systems aimed at predicting high-crime locations, not particular people? Do those systems raise the sorts of concerns I’ve discussed as Technological Due Process? A recent NPR story asked whether algorithmic predictions about high-risk locations can form the basis of a stop and frisk. If someone is in a hot zone, can that fact alone amount to reasonable suspicion to stop someone in that zone? During the NPR segment, law professor Andrew Guthrie Ferguson talked about the possibility that the computer’s prediction about the location may inform an officer’s thinking. An officer might credit the computer’s prediction and view everyone in a particular zone a different way. Concerns about automation bias are real. Humans defer to systems: surely a computer’s judgment is more trustworthy given its neutrality and expertise? Fallible human beings, however, build the algorithms, investing them with bias, and the systems may be filled with incomplete and erroneous information. Given that reality, police departments would be wise to train officers about automation bias; such training has proven effective in other contexts. In the longer term, making pre-commitments to training would help avoid unconstitutional stops and wasted resources. The constitutional question of the reasonableness of a stop and frisk would of course be addressed at the retail level, but it would be worth providing wholesale protections to avoid wasting police time on unwarranted stops and arrests.
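To make the place-based idea concrete, here is a minimal sketch in Python of the kind of hot-zone scoring discussed above; the grid cells and incident data are invented for illustration, and real systems are of course far more elaborate:

```python
from collections import Counter

# Minimal sketch of place-based hot-spot scoring: count historical incidents per
# grid cell and flag the busiest cells. The incident coordinates are invented.
past_incidents = [
    (3, 7), (3, 7), (3, 8), (12, 2), (3, 7), (12, 2), (5, 5), (3, 8), (3, 7),
]

counts = Counter(past_incidents)      # incidents recorded in each (x, y) cell
hot_zones = counts.most_common(3)     # the three busiest cells

for cell, n in hot_zones:
    print(f"cell {cell}: {n} past incidents")

# Note what the score describes: a location's history, not any individual who
# happens to be standing in that location today. Treating presence in a flagged
# cell as individualized suspicion is precisely the leap questioned above.
```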

H/T: Thanks to guest blogger Ryan Calo for drawing my attention to the NPR story.


The Problems and Promise with Terms of Use as the Chaperone of the Social Web

The New Republic recently published a piece by Jeffrey Rosen titled “The Delete Squad: Google, Twitter, Facebook, and the New Global Battle Over the Future of Free Speech.” In it, Rosen provides an interesting account of how the content policies of many major websites were developed and how influential those policies are for online expression. The New York Times has a related article about the mounting pressure on Facebook to delete offensive material.

Both articles raise important questions about the proper role of massive information intermediaries with respect to content deletion, but they also hint at a related problem: Facebook and other large websites often have vague restrictions on user behavior in their terms of use that are so expansive as to cover most aspects of interaction on the social web. In essence, these agreements allow intermediaries to serve as a chaperone on the field trip that is our electronically-mediated social experience.



Probabilistic Crime Solving

In our Big Data age, policing may shift its focus away from catching criminals to stopping crime from happening. That might sound like Hollywood “Minority Report” fantasy, but not to researchers hoping to leverage data to identify future crime areas. Consider as an illustration a research project sponsored by the Rutgers Center on Public Security. According to Government Technology, Rutgers professors have obtained a two-year, $500,000 grant to conduct “risk terrain modeling” research in U.S. cities. Working with police forces in Arlington, Texas; Chicago; Colorado Springs, Colorado; Glendale, Arizona; Kansas City, Missouri; and Newark, New Jersey, the team will analyze an area’s history of crime together with data on “local behavioral and physical characteristics” to identify the locations with the greatest crime risk. As Professor Joel Caplan explains, data analysis “paints a picture of those underlying features of the environment that are attractive for certain types of illegal behavior, and in doing so, we’re able to assign probabilities of crime occurring.” Criminals tend to shift their activity to different locations to evade detection. The hope is to detect the criminals’ next move before they get there. Mapping techniques will systematize what is now just a matter of instinct or guesswork, the researchers explain.
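A minimal sketch of what risk terrain modeling might look like in code follows; the grid, the risk layers, and the weights are all invented for illustration, since the post does not describe the actual Rutgers model:

```python
import numpy as np

# Toy sketch of risk terrain modeling: overlay several spatial "risk layers"
# (historical crime plus behavioral and physical features of each grid cell),
# combine them with weights, and rank cells by the combined score. Everything
# here (grid size, layers, weights) is invented for illustration.
rng = np.random.default_rng(0)
GRID = (5, 5)

layers = {
    "past_burglaries": rng.poisson(1.0, GRID),
    "vacant_buildings": rng.poisson(0.5, GRID),
    "late_night_foot_traffic": rng.poisson(2.0, GRID),
}
weights = {"past_burglaries": 0.5, "vacant_buildings": 0.3, "late_night_foot_traffic": 0.2}

# Normalize each layer to [0, 1] and take a weighted sum as a relative risk score.
risk = np.zeros(GRID)
for name, layer in layers.items():
    peak = layer.max()
    if peak > 0:
        risk += weights[name] * (layer / peak)

# The highest-scoring cells are the candidate "risk terrain" hot spots.
order = np.argsort(risk, axis=None)[::-1][:3]
rows, cols = np.unravel_index(order, GRID)
print("Highest-risk cells (row, col):", list(zip(rows.tolist(), cols.tolist())))
```

The resulting scores are relative rankings, not true probabilities; turning them into “probabilities of crime occurring,” as the quoted researcher puts it, would require calibrating such a model against observed outcomes.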

Will reactive policing give way to predictive policing? Will police departments someday staff officers outside probabilistic targets to prevent criminals from ever acting on criminal designs? The data inputs and algorithms are crucial to the success of any Big Data endeavor. Before diving headlong, we ought to ask about the provenance of the “local behavioral and physical characteristics” data. Will researchers be given access to live feeds from CCTV cameras and data broker dossiers? Will they be mining public and private sector databases along the lines of fusion centers? Because these projects involve state actors who are bound neither by the federal Privacy Act of 1974 nor by federal restrictions on the collection of personal data, do state privacy laws limit the sorts of data that can be collected, analyzed, and shared? Does the Fourth Amendment have a role in such predictive policing? Is this project just the beginning of a system in which citizens receive criminal risk scores? The time is certainly ripe to talk more seriously about “technological due process” and the “right to quantitative privacy” for the surveillance age.