Category: Cyberlaw


Rent books on Amazon? Hmm.

As I work away on 3D printing, I am looking at the regulation literature. Ayres and Braithwaite's Responsive Regulation is available on Amazon for $34.99 for Kindle, or you can rent it starting at $14.73 (no kidding, it is that precise). There is a calendar on which you can select the length of the rental (three months comes out to $22.30, and, to Amazon's credit, hovering over a date shows the price rather than requiring a click on each date). On the one hand this offering seems rather nifty. Yet I wonder what arguments about market availability and fair use will be made with this sort of rental model for books in play. And this option brings us one step closer to perfect price discrimination. Would I see the same rental price as someone else? Would I need some research assistant to rent for me? Would that person's price model be forever altered based on a brief period of working for a professor? What about librarians who rent books for work? (I suppose work accounts would be differentiated, but the overlap in interests may shift what that person sees on a personal account too.) Perhaps Ayres and Braithwaite's regulation pyramid is needed yet again.


Secret Adjudications: the No Fly List, the Right to International Air Travel, and Procedural Justice?

Latif v. Holder concerns the procedures owed to individuals denied the right to travel internationally due to their inclusion in the Terrorist Screening Database. Thirteen individuals sued the FBI, which maintains the No Fly list and the Terrorist Screening Database. Four plaintiffs are veterans of the armed forces; others simply have Muslim-sounding names. All of the plaintiffs are U.S. citizens or lawful residents. The plaintiffs' stories are varied but follow a similar trajectory. One plaintiff, a U.S. Army veteran, was not allowed to return to the U.S. from Colombia after visiting his wife's relatives. Because he could not fly to the U.S., he missed a medical exam required for his new job, and the employer rescinded his offer. Another plaintiff, a U.S. Air Force veteran, was in Ireland visiting his wife; he spent four months trying desperately to return to Boston. Denied the right to travel internationally, the thirteen plaintiffs lost jobs, business opportunities, and disability benefits. Important family events were missed. The plaintiffs could not travel to perform religious duties like the hajj. Some plaintiffs were allegedly told that they could regain their right to fly if they served as informants or told "what they knew," but that option was unhelpful because they had nothing to offer federal officials.

Plaintiffs outside the U.S. were allowed to return to their homes on a one-time pass. Once back in the U.S., they turned to the TSA's redress process (calling it "process" seems bizarre). The process involves filling out a form describing the inability to travel and sending it via DHS to the Terrorist Screening Center. The Terrorist Screening Center says that it reviews the information to determine whether the person's name is an exact match for someone included in the terrorist database or on the No Fly list. All of the plaintiffs filed redress claims; all received DHS determination letters that neither confirmed nor denied their inclusion on the list. The letters told the plaintiffs essentially nothing: we reviewed your claim, and we cannot tell you our determination.

The plaintiffs sued the federal government on procedural due process and APA grounds. They argued that the DHS, FBI, and TSA deprived them of their right to procedural due process by failing to give them post-deprivation notice or a meaningful chance to contest their inclusion in the terrorist database or on the No Fly list, which they had to presume as a factual matter from their inability to travel, though some of the plaintiffs were told informally that they appeared on the No Fly list. The standard Mathews v. Eldridge analysis determines the nature of the due process hearings owed to individuals whose life, liberty, or property is threatened by agency action. Under Mathews, courts weigh the value of the person's threatened interest; the risk of erroneous deprivation and the probable benefit of additional or substitute procedures; and the government's asserted interests, including national security concerns and the cost of additional safeguards.

Most recently, the judge partially granted plaintiffs' summary judgment motion and ordered further briefing, set for September 9. In the August ruling, plaintiffs were victorious in important respects. The judge found that plaintiffs had a constitutionally important interest at stake: the right to fly internationally. As the judge explained, plaintiffs had been totally banned from flying internationally, which effectively meant that they could not travel into or out of the U.S. They were not merely inconvenienced. None could take a train or car to their desired destinations. Some had great difficulty returning to the U.S. by other means, including boat, because the No Fly list is shared with 22 foreign countries and with U.S. Customs and Border Protection. Having the same name as someone flagged as a terrorist (or a name matching a misspelling or mistranslation of one) can mean not being able to travel internationally. Period. The court also held that the federal government interfered with another constitutionally important interest, what the Court has called "stigma plus": harm to reputation plus interference with travel. The judge might also have pointed to property, given that the lost jobs and benefits amounted to the "plus" deprivation.

That takes care of the first Mathews factor. Now for the second. The court assessed the risk of erroneous deprivation under the current DHS redress process. On that point, the court noted that it is hard to imagine how the plaintiffs had any chance to ensure that DHS got it right, because they never received notice of whether they were on the list or, if they were, why. Plaintiffs had no chance to explain their side of the story or to correct misinformation held by the government; what that misinformation or inaccuracy might be was unknown to them. In the "trust us" vein all too familiar these days, defendants argued that the risk of error is minute because the database is updated daily, officials regularly review and audit the list, and nominations to the list must be reviewed by TSC personnel. To that, the court responded that the DOJ's own Inspector General had criticized the No Fly list as riddled with errors in 2007 and again in 2012. Defendants also pointed to the theoretical availability of judicial review as proof that the risk of erroneous deprivation was small. The court put off determining the risk of erroneous deprivation and the value of added procedures because it could not evaluate the defendants' claim that plaintiffs could seek judicial review of determinations of which they had no notice. Defendants apparently conceded that there were no known appellate decisions providing meaningful judicial review. The court required the defendants to provide more briefing on the reality of that possibility, which I must say seems difficult if not impossible for plaintiffs to pursue. Because the court could not weigh the second factor, it could not balance the first two considerations against the government's interest.

I will have more to say about the decision tomorrow. The process provided seems Kafkaesque. It is hard to imagine the defendants filing anything from which the public will learn much. The briefing will surely be submitted for in camera review. Details of the process may be deemed classified. If so, defendants may invoke the state secrets doctrine to prevent the court from ever meaningfully addressing the rest of the summary judgment motion. It would not be the first time that the federal government invoked the state secrets doctrine to cover up embarrassing details of mismanagement; since its beginnings, the doctrine has done just that. The parties were supposed to provide the court a status update today. More soon.


Stanford Law Review Online: Privacy and Big Data


The Stanford Law Review Online has just published a Symposium of articles entitled Privacy and Big Data.

Although the solutions to many modern economic and societal challenges may be found in better understanding data, the dramatic increase in the amount and variety of data collection poses serious concerns about infringements on privacy. In our 2013 Symposium Issue, experts weigh in on these important questions at the intersection of big data and privacy.

Read the full articles, Privacy and Big Data, at the Stanford Law Review Online.

 


Predictive Policing and Technological Due Process

Police departments have increasingly been crunching data to identify criminal hot spots and to allocate policing resources to address them. Predictive policing has been around for a while without raising too many alarms. Given the daily proof that we live in a surveillance state, such policing seems downright quaint. Putting more police on the beat to address likely crime is smart. In such cases, software is not making predictive adjudications about particular individuals. Might governmental systems someday assign us risk ratings, predicting whether we are likely to commit crime? We certainly live in a scoring society. The private sector is madly scoring us. Individuals are denied the ability to open bank accounts; they are identified as strong potential hires (or not); they are deemed "waste" not worthy of special advertising deals; and so on. Private actors don't owe us any process, at least as far as the Constitution is concerned. On the other hand, if governmental systems make decisions about our property (perhaps licenses denied due to a poor risk score), liberty (watch list designations leading to liberty intrusions), and life (who knows, with drones in the picture), due process concerns would be implicated.

What about systems aimed at predicting high-crime locations, not particular people? Do those systems raise the sorts of concerns I've discussed as Technological Due Process? A recent NPR story asked whether algorithmic predictions about high-risk locations can form the basis of a stop and frisk. If someone is in a hot zone, can that fact alone amount to reasonable suspicion to stop that person? During the NPR segment, law professor Andrew Guthrie Ferguson talked about the possibility that the computer's prediction about the location may inform an officer's thinking. An officer might credit the computer's prediction and view everyone in a particular zone a different way. Concerns about automation bias are real. Humans defer to systems: surely a computer's judgment is more trustworthy given its neutrality and expertise? Fallible human beings, however, build the algorithms, investing them with bias, and the systems may be filled with incomplete and erroneous information. Given the reality of automation bias, police departments would be wise to train officers to guard against it; such training has proven effective in other contexts. In the longer term, making pre-commitments to training would help avoid unconstitutional stops and wasted resources. The constitutional question of the reasonableness of a stop and frisk would of course be addressed at the retail level, but it would be worth providing wholesale protections to avoid wasting police time on unwarranted stops and arrests.

H/T: Thanks to guest blogger Ryan Calo for drawing my attention to the NPR story.


The Problems and Promise with Terms of Use as the Chaperone of the Social Web

The New Republic recently published a piece by Jeffrey Rosen titled "The Delete Squad: Google, Twitter, Facebook, and the New Global Battle Over the Future of Free Speech." In it, Rosen provides an interesting account of how the content policies of many major websites were developed and how influential those policies are for online expression. The New York Times has a related article about the mounting pressures for Facebook to delete offensive material.

Both articles raise important questions about the proper role of massive information intermediaries with respect to content deletion, but they also hint at a related problem: Facebook and other large websites often have vague restrictions on user behavior in their terms of use that are so expansive as to cover most aspects of interaction on the social web. In essence, these agreements allow intermediaries to serve as a chaperone on the field trip that is our electronically-mediated social experience.



Probabilistic Crime Solving

In our Big Data age, policing may shift its focus away from catching criminals to stopping crime from happening. That might sound like Hollywood "Minority Report" fantasy, but not to researchers hoping to leverage data to identify future crime areas. Consider as an illustration a research project sponsored by the Rutgers Center on Public Security. According to Government Technology, Rutgers professors have obtained a two-year, $500,000 grant to conduct "risk terrain modeling" research in U.S. cities. Working with police forces in Arlington, Texas; Chicago; Colorado Springs, Colorado; Glendale, Arizona; Kansas City, Missouri; and Newark, New Jersey, the team will combine an area's history of crime with data on "local behavioral and physical characteristics" to identify the locations at greatest risk of crime. As Professor Joel Caplan explains, data analysis "paints a picture of those underlying features of the environment that are attractive for certain types of illegal behavior, and in doing so, we're able to assign probabilities of crime occurring." Criminals tend to shift their activity to different locations to evade detection; the hope is to detect the criminals' next move before they get there. Mapping techniques will systematize what is now just a matter of instinct or guesswork, the researchers explain.
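To make the idea concrete, here is a minimal, purely illustrative sketch of how a risk terrain model might overlay weighted data layers on a grid of city blocks. The layer names, weights, and random stand-in data are hypothetical; this is not the Rutgers team's actual method.

```python
# Toy "risk terrain" sketch: overlay weighted risk layers on a grid of blocks.
# All layers, weights, and data below are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
GRID = (10, 10)  # a 10 x 10 grid of city blocks

# Hypothetical risk layers, each scored 0..1 per block (random here).
layers = {
    "past_crime_density": rng.random(GRID),
    "proximity_to_bars": rng.random(GRID),
    "vacant_buildings": rng.random(GRID),
}

# Hypothetical weights for how strongly each feature is thought to attract crime.
weights = {"past_crime_density": 0.5, "proximity_to_bars": 0.3, "vacant_buildings": 0.2}

# Overlay the weighted layers into a single risk surface.
risk = sum(weights[name] * layer for name, layer in layers.items())

# Turn the surface into relative probabilities and flag the riskiest blocks.
prob = risk / risk.sum()
flat_order = np.argsort(prob, axis=None)[::-1]  # highest risk first
rows, cols = np.unravel_index(flat_order[:5], GRID)

print("Five highest-risk blocks (row, col):")
for r, c in zip(rows, cols):
    print(f"  block ({r}, {c}): relative risk {prob[r, c]:.3%}")
```

In a real system the layers would come from actual crime and environmental data and the weights would be estimated rather than assumed, and those inputs and weights are exactly where the questions below about data provenance and bias arise.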

Will reactive policing give way to predictive policing? Will police departments someday station officers at probabilistic targets to prevent criminals from ever acting on criminal designs? The data inputs and algorithms are crucial to the success of any Big Data endeavor. Before diving in headlong, we ought to ask about the provenance of the "local behavioral and physical characteristics" data. Will researchers be given access to live feeds from CCTV cameras and to data broker dossiers? Will they be mining public and private sector databases along the lines of fusion centers? Because these projects involve state actors who are bound neither by the federal Privacy Act of 1974 nor by federal restrictions on the collection of personal data, do state privacy laws limit the sorts of data that can be collected, analyzed, and shared? Does the Fourth Amendment have a role in such predictive policing? Is this project just the beginning of a system in which citizens receive criminal risk scores? The time is certainly ripe to talk more seriously about "technological due process" and the "right to quantitative privacy" for the surveillance age.


Employers and Schools that Demand Account Passwords and the Future of Cloud Privacy

In 2012, the media erupted with news about employers demanding that employees provide their social media passwords so the employers could access their accounts. This news took many people by surprise, and it set off a firestorm of public outrage. It even sparked a significant legislative response in the states.

I thought that the practice of demanding passwords was so outrageous that it couldn’t be very common. What kind of company or organization would actually do this? I thought it was a fringe practice done by a few small companies without much awareness of privacy law.

But Bradley Shear, an attorney who has focused extensively on the issue, opened my eyes to the fact that the practice is much more prevalent than I had imagined, and it is an issue that has very important implications as we move more of our personal data to the Cloud.

The Widespread Hunger for Access

Employers are not the only ones demanding social media passwords – schools are doing so too, especially athletic departments in higher education, many of which engage in extensive monitoring of the online activities of student athletes. Some require students to turn over passwords, install special software and apps, or friend coaches on Facebook and other sites. According to an article in USA Today: “As a condition of participating in sports, the schools require athletes to agree to monitoring software being placed on their social media accounts. This software emails alerts to coaches whenever athletes use a word that could embarrass the student, the university or tarnish their images on services such as Twitter, Facebook, YouTube and MySpace.”
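The USA Today description suggests fairly simple keyword matching against a watch list. Here is a minimal sketch of that kind of alerting, assuming a hypothetical flagged-word list, sender address, and local mail relay; the actual monitoring products' rules and integrations are not described in this post.

```python
# Minimal sketch of keyword-alert monitoring of the sort USA Today describes.
# The watch list, sender address, and mail relay are hypothetical.
import re
import smtplib
from email.message import EmailMessage

FLAGGED_WORDS = {"hazing", "gambling", "agent"}  # hypothetical watch list


def find_flags(post_text: str) -> set:
    """Return any flagged words appearing in a post (case-insensitive)."""
    words = set(re.findall(r"[a-z']+", post_text.lower()))
    return words & FLAGGED_WORDS


def alert_coach(athlete: str, post_text: str, coach_email: str) -> None:
    """Email the coach if a monitored athlete's post contains a flagged word."""
    flags = find_flags(post_text)
    if not flags:
        return
    msg = EmailMessage()
    msg["Subject"] = f"Flagged post by {athlete}: {', '.join(sorted(flags))}"
    msg["To"] = coach_email
    msg["From"] = "monitor@example.edu"        # hypothetical sender address
    msg.set_content(post_text)
    with smtplib.SMTP("localhost") as server:  # assumes a local mail relay
        server.send_message(msg)


# Example: this post would trigger an alert on the word "agent".
alert_coach("J. Doe", "Met with an agent after practice today", "coach@example.edu")
```

Even this toy version shows how blunt the approach is: bare keyword matching cannot distinguish an innocent use of a word from an embarrassing one, which is part of why such monitoring sweeps so broadly into students' speech.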

Not only are colleges and universities engaging in the practice, but K-12 schools are doing so as well. An MSNBC article discusses a parent's outrage over school officials demanding access to a 13-year-old girl's Facebook account. According to the mother, "The whole family is exposed in this. . . . Some families communicate through Facebook. What if her aunt was going through a divorce or had an illness? And now there's these anonymous people reading through this information."

In addition to private sector employers and schools, public sector employers such as state government agencies are demanding access to online accounts. According to another MSNBC article: “In Maryland, job seekers applying to the state’s Department of Corrections have been asked during interviews to log into their accounts and let an interviewer watch while the potential employee clicks through posts, friends, photos and anything else that might be found behind the privacy wall.”



Tumblr, Porn, and Internet Intermediaries

In the hubbub surrounding this week's acquisition of the blogging platform Tumblr by born-again internet hub Yahoo!, I thought one of the most interesting observations concerned the regulation of pornography. It led, by a winding path, to a topic near and dear to the Concurring Opinions gang: Section 230 of the Communications Decency Act, which generally immunizes online intermediaries from liability for user-generated content. (Just a few examples of many ConOp discussions of Section 230: this old post by Dan Solove and a January 2013 series of posts by Danielle Citron on Section 230 and revenge porn here, here, and here.)

Apparently Tumblr has a very large amount of NSFW material compared to other sites with user-generated content. By one estimate, over 11% of the site’s 200,000 most popular blogs are “adult.” By my math that’s well over 20,000 of the site’s power users.

Predictably, much of the ensuing discussion focused on the implications of all that smut for business and branding. But Peter Kafka explains on All Things D that the structure of Tumblr prevents advertisements for family-friendly brands from showing up next to pornographic content. His reassuring tone almost lets you hear the "whew" from Yahoo! investors (as if harm to brands were the only relevant consideration about porn, which, for many tech journalists and entrepreneurs, it is).

There is another potential porn problem besides bad PR, and it is legal. Lux Alptraum, writing in Fast Company, addressed it.  (The author is, according to her bio, “a writer, sex educator, and CEO of Fleshbot, the web’s foremost blog about sexuality and adult entertainment.”) She somewhat conflates two different issues — understandably, since they are related — but that’s part of what I think is interesting. A lot of that user-posted porn is violating copyright law, or regulations meant to protect minors from exploitation, or both. To what extent might Tumblr be on the hook for those violations?



UCLA Law Review Vol. 60, Discourse

Volume 60, Discourse

Reflections on Sexual Liberty and Equality: "Through Seneca Falls and Selma and Stonewall," by Nan D. Hunter (p. 172)
Framing (In)Equality for Same-Sex Couples, by Douglas NeJaime (p. 184)
The Uncertain Relationship Between Open Data and Accountability: A Response to Yu and Robinson's The New Ambiguity of "Open Government," by Tiago Peixoto (p. 200)
Self-Congratulation and Scholarship, by Paul Campos (p. 214)

Computer Crime Law Goes to the Casino

Wired’s Kevin Poulsen has a great story whose title tells it all: Use a Software Bug to Win Video Poker? That’s a Federal Hacking Case. Two alleged video-poker cheats, John Kane and Andre Nestor, are being prosecuted under the Computer Fraud and Abuse Act, 18 U.S.C. § 1030. Theirs is a hard case, and it is hard in a way that illustrates why all CFAA cases are hard.
