Archive for the ‘Cyberlaw’ Category
posted by Deven Desai
As I saw that Amazon is tinkering with drone delivery, I thought “How very Stephenson,” since the opening of Snow Crash tracks the idea of 30-minutes-or-less delivery. Of course, others made this connection overnight. And although Fox News hyped the idea as the Senate holding hearings on Amazon and drones (“Senate to hold hearing to discuss Amazon package delivery drones“), the hearings were already in place, as Fox itself reports. The Amazon angle is icing on the cake of the current freak-out about drones. And yes, there are reasons to think about drones and what, if anything, should be done to regulate them.

In this post I am more interested in the labor issues. Chris Taylor’s thoughts at Mashable get into this question. There are many limits to the tech. But as I wrote before, Amazon strikes me as well placed to press into new ways of using this sort of technology to reduce its labor needs. With local distribution sites, same-day (or now maybe within-the-hour) delivery, and perhaps on-demand printing of books (or 3D things), Amazon could yet again change shopping. The Supreme Court declined to hear the case about forcing retailers to collect taxes even when they have no presence in a state. Amazon’s response of moving into states and taking on local retailers may increase competition locally, and in an ironic twist, the supposedly fair imposition of taxes may prove to eat at local businesses more than expected.
posted by Fred Tung
Trade law should not allow countries to insist on a regulatory nirvana in cyberspace unmatched in real space.
Reading Anupam Chander’s The Electronic Silk Road has been a real treat, and thanks to the folks at Concurring Opinions for organizing this terrific online symposium and including me. The book offers a wide-ranging and insightful discussion of global electronic commerce and its regulation and management. Anupam proposes general principles—rules of the road, essentially—to guide policymakers in regulating and managing global e-commerce. The very first principle introduced in the book—the quotation above captures its essence—is that of technological neutrality: to keep cybertrade free and open, the online provision of a service should not be subject to more onerous regulatory burdens than its offline counterpart.
I wish to focus on this first principle. It seems a balanced and uncontroversial prescription. Why should local regulators saddle online service providers with heavier regulatory burdens than the local bricks-and-mortar competitors? The specter of protectionism lurks!
For me, Anupam’s technological neutrality principle is insufficiently ambitious with respect to the possibilities for effective regulation of e-commerce. Anupam’s concerns are free trade concerns, with which I am sympathetic. At the same time, though, e-commerce may actually be able to do better than brick-and-mortar commerce on a number of important regulatory fronts, and technological neutrality gives up on those possibilities. It relieves the pressure to pursue more efficient regulation in cyberspace.
posted by Anupam Chander
Last week, Foreign Affairs posted a note about my book, The Electronic Silk Road, on its Facebook page. In the comments, some clever wag asked, “Didn’t the FBI shut this down a few weeks ago?” In other venues as well, as I have shared portions of my book across the web, individuals around the world have written back, sometimes applauding and at other times challenging my claims. My writing itself has journeyed across the world. When I adapted part of a chapter as “How Censorship Hurts Chinese Internet Companies” for The Atlantic, the China Daily republished it. The Financial Times published its review of the book in both English and Chinese.
Even these posts involved international trade. Much of this activity ran through websites, from Facebook to The Atlantic and the Financial Times, each earning revenue in part from cross-border advertising (even the government-owned China Daily is apparently under pressure to increase advertising). In the second quarter of 2013, for example, Facebook earned the majority of its revenues outside the United States: $995 million out of a total of $1,813 million, or 55 percent.
But this trade also brought communication—with ideas and critiques circulated around the world. The old silk roads similarly were passages not only for goods, but knowledge. They helped shape our world, not only materially, but spiritually, just as the mix of commerce and communication on the Electronic Silk Road will reshape the world to come.
October 28, 2013 at 5:46 pm Posted in: Consumer Protection Law, Cyberlaw, First Amendment, Intellectual Property, International & Comparative Law, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Symposium (The Electronic Silk Road)
posted by Deven Desai
Danielle and I are happy to announce that next week, Concurring Opinions will host an online symposium on Professor Anupam Chander’s The Electronic Silk Road: How the Web Binds the World Together in Commerce. Professor Chander teaches at U.C. Davis’s King Hall School of Law. Senators, academics, trade representatives, and pundits laud the book for its clarity and the argument Professor Chander makes. He examines how the law can facilitate commerce by reducing trade barriers but argues that consumer interests need not be sacrificed:
On the ancient Silk Road, treasure-laden caravans made their arduous way through deserts and mountain passes, establishing trade between Asia and the civilizations of Europe and the Mediterranean. Today’s electronic Silk Roads ferry information across continents, enabling individuals and corporations anywhere to provide or receive services without obtaining a visa. But the legal infrastructure for such trade is yet rudimentary and uncertain. If an event in cyberspace occurs at once everywhere and nowhere, what law applies? How can consumers be protected when engaging with companies across the world?
But will the book hold up under our panel’s scrutiny? I think so, but only after some probing and dialogue.
Our Panelists include Professor Chander as well as:
And of course, Danielle Citron and I will be there too.
October 21, 2013 at 3:40 pm Posted in: Cyberlaw, DRM, Innovation, Intellectual Property, Political Economy, Privacy, Symposium (The Electronic Silk Road), Technology, Trade, Web 2.0
posted by Kaimipono D. Wenger
A handful of state legislatures have recently passed, or are considering, bills to address the harm of non-consensual pornography (often called ‘revenge porn’). The topic raises important questions about privacy, civil rights, and online speech and harassment.
Law professor Mary Anne Franks has written previously on the topic in multiple venues, including in guest posts at Concurring Opinions. We were pleased to catch up with her recently to discuss the latest developments. Our interview follows:
Hi, Mary Anne! Thanks so much for joining us for an interview. This is a really interesting topic, and we’re glad to get your take on it.
I am delighted to be here! Thank you for having me.
Okay, some substantive questions. First, what is ‘revenge porn’? Read the rest of this post »
posted by Deven Desai
As I work away on 3D printing, I am looking at the regulation literature. Ayres and Braithwaite’s Responsive Regulation is available on Amazon for $34.99 for Kindle, or you can rent it starting at $14.73 (no kidding, it is that precise). There is a calendar on which you select the length of the rental; three months comes out to $22.30, and to Amazon’s credit, hovering over a date shows the price rather than requiring you to click each one. On the one hand this offering seems rather nifty. Yet I wonder what arguments about market availability and fair use will be made with this sort of rental model for books in play. And this option brings us one step closer to perfect price discrimination. Would I see the same rental price as someone else? Would I need a research assistant to rent for me? Would that person’s price model be forever altered by a brief period of working for a professor? What about librarians who rent books for work? (I suppose work accounts would be differentiated, but the overlap in interests may shift what that person sees on a personal account too.) Perhaps Ayres and Braithwaite’s regulation pyramid is needed yet again.
Secret Adjudications: The No Fly List, the Right to International Air Travel, and Procedural Justice?
posted by Danielle Citron
Latif v. Holder concerns the procedures owed individuals denied the right to travel internationally due to their inclusion in the Terrorist Screening database. Thirteen individuals sued the FBI, which maintains the No Fly list and the Terrorist Screening database. Four plaintiffs are veterans of the armed forces; others just have Muslim-sounding names. All of the plaintiffs are U.S. citizens or lawful residents.

The plaintiffs’ stories are varied but follow a similar trajectory. One plaintiff, a U.S. Army veteran, was not allowed to return to the U.S. from Colombia after visiting his wife’s relatives. Because he could not fly to the U.S., he missed a medical exam required for his new job, and the employer rescinded his offer. Another plaintiff, a U.S. Air Force veteran, was in Ireland visiting his wife; he spent four months trying desperately to return to Boston. Denied the right to travel internationally, the thirteen plaintiffs lost jobs, business opportunities, and disability benefits. Important family events were missed. The plaintiffs could not travel to perform religious duties like the hajj. Some plaintiffs were allegedly told that they could regain their right to fly if they served as informants or told “what they knew,” but that option was unhelpful because they had nothing to offer federal officials.

Plaintiffs outside the U.S. were allowed to return to their homes on a one-time pass. Once back in the U.S., they turned to the TSA’s redress process (calling it “process” seems bizarre). The process involves filling out a form describing the inability to travel and sending it via DHS to the Terrorist Screening Center. The Terrorist Screening Center says that it reviews the information to determine whether the person’s name is an exact match for someone included in the terrorist database or No Fly list. All of the plaintiffs filed redress claims; all received DHS determination letters that neither confirmed nor denied their inclusion on the list.
The letters basically told the plaintiffs nothing–they essentially said, we reviewed your claim, and we cannot tell you our determination.
The plaintiffs sued the federal government on procedural due process and APA grounds. They argued that the DHS, FBI, and TSA deprived them of their right to procedural due process by failing to give them post-deprivation notice or a meaningful chance to contest their inclusion in the terrorist database or No Fly list, which they have to presume as a factual matter based on their inability to travel, though some of the plaintiffs were told informally that they appeared on the No Fly list. The standard Mathews v. Eldridge analysis determines the nature of the due process hearings owed individuals whose life, liberty, or property is threatened by agency action. Under Mathews, courts weigh the value of the person’s threatened interest, the risk of erroneous deprivation and the probable benefit of additional or substitute procedures, and the government’s asserted interests, including national security concerns and the cost of additional safeguards.
Most recently, the judge partially granted plaintiffs’ summary judgment motion, ordering further briefing set for September 9. In the August ruling, plaintiffs were victorious in important respects. The judge found that plaintiffs had a constitutionally important interest at stake: the right to fly internationally. As the judge explained, plaintiffs had been totally banned from flying internationally, which effectively meant that they could not travel in or out of the U.S. They were not merely inconvenienced. None could take a train or car to their desired destinations. Some had great difficulty returning to the U.S. by other means, including boat, because the No Fly list is shared with 22 foreign countries and U.S. Customs and Border Protection. Having the same name as someone flagged as a terrorist (or the same name as a misspelling or mistranslation) can mean not being able to travel internationally. Period. The court also held that the federal government interfered with another constitutionally important interest, what the Court has called “stigma plus”: harm to reputation plus interference with travel. The court might also have cited property, given that the lost jobs and benefits amounted to the “plus” deprivation.

That takes care of the first Mathews factor. Now for the second. The court assessed the risk of erroneous deprivation under the current DHS redress process. On that point, the court noted that it is hard to imagine how the plaintiffs had any chance to ensure that DHS got it right, because they never received notice of whether they were on the list or, if they were, why. Plaintiffs had no chance to explain their side of the story or to correct misinformation held by the government; what that misinformation or inaccuracy was remained unknown to them.
In the “trust us” vein all too familiar these days, defendants argued that the risk of error is minute because the database is updated daily, officials regularly review and audit the list, and nominations to the list must be reviewed by TSC personnel. The court countered that the DOJ’s own Inspector General had criticized the No Fly list in 2007 and again in 2012 as riddled with errors. Defendants also pointed to the availability of judicial review as proof that the risk of erroneous deprivation was small. The court pushed off making a determination on the risk of erroneous deprivation and the value of added procedures because it could not evaluate the defendants’ claim that plaintiffs could theoretically seek judicial review of determinations of which they have no notice. Defendants apparently conceded that there were no known appellate decisions providing meaningful judicial review. The court required the defendants to provide more briefing on the reality of that possibility, which I must say seems difficult if not impossible for plaintiffs to pursue. Because the court could not weigh the second factor, it could not balance the first two considerations against the government’s interest.
I will have more to say about the decision tomorrow. The process provided seems Kafka-esque. It’s hard to imagine what the defendants will file that the public can possibly learn from. The briefing will surely be submitted for in camera review. Details of the process may be deemed classified. If so, defendants may invoke the state secrets doctrine to prevent the court from ever meaningfully addressing the rest of the summary judgment motion. It would not be the first time that the federal government invoked the state secrets doctrine to cover up embarrassing details of mismanagement. Since its beginnings, the state secrets doctrine has done just that. The parties were supposed to provide the court a status update today. More soon.
posted by Stanford Law Review
Although the solutions to many modern economic and societal challenges may be found in better understanding data, the dramatic increase in the amount and variety of data collection poses serious concerns about infringements on privacy. In our 2013 Symposium Issue, experts weigh in on these important questions at the intersection of big data and privacy.
September 3, 2013 at 7:47 am Posted in: Behavioral Law and Economics, Constitutional Law, Criminal Law, Cyber Civil Rights, Cyberlaw, Empirical Analysis of Law, Intellectual Property, Law Rev (Stanford)
posted by Danielle Citron
Police departments have increasingly been crunching data to identify criminal hot spots and to allocate policing resources accordingly. Predictive policing has been around for a while without raising too many alarms. Given the daily proof that we live in a surveillance state, such policing seems downright quaint. Putting more police on the beat to address likely crime is smart. In such cases, software is not making predictive adjudications about particular individuals.

Might governmental systems someday assign us risk ratings, predicting whether we are likely to commit crimes? We certainly live in a scoring society. The private sector is madly scoring us. Individuals are denied the ability to open bank accounts; they are identified as strong potential hires (or not); they are deemed “waste” not worthy of special advertising deals; and so on. Private actors don’t owe us any process, at least as far as the Constitution is concerned. On the other hand, if governmental systems made decisions about our property (perhaps licenses denied due to a poor risk score), liberty (watch-list designations leading to liberty intrusions), and life (who knows, with drones in the picture), due process concerns would be implicated.
What about systems aimed at predicting high-crime locations, not particular people? Do those systems raise the sorts of concerns I’ve discussed as Technological Due Process? A recent NPR story asked whether algorithmic predictions about high-risk locations can form the basis of a stop and frisk. If someone is in a hot zone, can that very fact amount to reasonable suspicion to stop someone in that zone? During the NPR segment, law professor Andrew Guthrie Ferguson talked about the possibility that the computer’s prediction about the location may inform an officer’s thinking. An officer might credit the computer’s prediction and view everyone in a particular zone a different way. Concerns about automation bias are real. Humans defer to systems: surely a computer’s judgment is more trustworthy, given its neutrality and expertise? Fallible human beings, however, build the algorithms, investing them with their biases, and the systems may be filled with incomplete and erroneous information. Given the reality of automation bias, police departments would be wise to train officers about it; such training has proven effective in other contexts. In the longer term, making pre-commitments to training would help avoid unconstitutional stops and wasted resources. The constitutional question of the reasonableness of a stop and frisk would of course be addressed at the retail level, but it would be worth providing wholesale protections to avoid wasting police time on unwarranted stops and arrests.
H/T: Thanks to guest blogger Ryan Calo for drawing my attention to the NPR story.
posted by Woodrow Hartzog
The New Republic recently published a piece by Jeffrey Rosen titled “The Delete Squad: Google, Twitter, Facebook, and the New Global Battle Over the Future of Free Speech.” In it, Rosen provides an interesting account of how the content policies of many major websites were developed and how influential those policies are for online expression. The New York Times has a related article about the mounting pressures for Facebook to delete offensive material.
posted by Danielle Citron
In our Big Data age, policing may shift its focus away from catching criminals and toward stopping crime from happening. That might sound like Hollywood “Minority Report” fantasy, but not to researchers hoping to leverage data to identify future crime areas. Consider as an illustration a research project sponsored by the Rutgers Center on Public Security. According to Government Technology, Rutgers professors have obtained a two-year, $500,000 grant to conduct “risk terrain modeling” research in U.S. cities. Working with police forces in Arlington, Texas; Chicago; Colorado Springs, Colorado; Glendale, Arizona; Kansas City, Missouri; and Newark, New Jersey, the team will analyze an area’s history of crime together with data on “local behavioral and physical characteristics” to identify the locations with the greatest crime risk. As Professor Joel Caplan explains, data analysis “paints a picture of those underlying features of the environment that are attractive for certain types of illegal behavior, and in doing so, we’re able to assign probabilities of crime occurring.” Criminals tend to shift their activity to different locations to evade detection; the hope is to detect the criminals’ next move before they get there. Mapping techniques will systematize what is now just a matter of instinct or guesswork, researchers explain.
Will reactive policing give way to predictive policing? Will police departments someday staff officers outside probabilistic targets to prevent criminals from ever acting on criminal designs? The data inputs and algorithms are crucial to the success of any Big Data endeavor. Before diving in headlong, we ought to ask about the provenance of the “local behavioral and physical characteristics” data. Will researchers be given access to live feeds from CCTV cameras and data-broker dossiers? Will they be mining public- and private-sector databases along the lines of fusion centers? Because these projects involve state actors who are bound neither by the federal Privacy Act of 1974 nor by federal restrictions on the collection of personal data, do state privacy laws limit the sorts of data that can be collected, analyzed, and shared? Does the Fourth Amendment have a role in such predictive policing? Is this project just the beginning of a system in which citizens receive criminal risk scores? The time is certainly ripe to talk more seriously about “technological due process” and the “right to quantitative privacy” for the surveillance age.
posted by Daniel Solove
In 2012, the media erupted with news about employers demanding employees provide them with their social media passwords so the employers could access their accounts. This news took many people by surprise, and it set off a firestorm of public outrage. It even sparked a significant legislative response in the states.
I thought that the practice of demanding passwords was so outrageous that it couldn’t be very common. What kind of company or organization would actually do this? I thought it was a fringe practice done by a few small companies without much awareness of privacy law.
But Bradley Shear, an attorney who has focused extensively on the issue, opened my eyes to the fact that the practice is much more prevalent than I had imagined, and it is an issue that has very important implications as we move more of our personal data to the Cloud.
The Widespread Hunger for Access
Employers are not the only ones demanding social media passwords – schools are doing so too, especially athletic departments in higher education, many of which engage in extensive monitoring of the online activities of student athletes. Some require students to turn over passwords, install special software and apps, or friend coaches on Facebook and other sites. According to an article in USA Today: “As a condition of participating in sports, the schools require athletes to agree to monitoring software being placed on their social media accounts. This software emails alerts to coaches whenever athletes use a word that could embarrass the student, the university or tarnish their images on services such as Twitter, Facebook, YouTube and MySpace.”
Not only are colleges and universities engaging in the practice, but K-12 schools are doing so as well. An MSNBC article discusses a parent’s outrage over school officials demanding access to a 13-year-old girl’s Facebook account. According to the mother, “The whole family is exposed in this. . . . Some families communicate through Facebook. What if her aunt was going through a divorce or had an illness? And now there’s these anonymous people reading through this information.”
In addition to private sector employers and schools, public sector employers such as state government agencies are demanding access to online accounts. According to another MSNBC article: “In Maryland, job seekers applying to the state’s Department of Corrections have been asked during interviews to log into their accounts and let an interviewer watch while the potential employee clicks through posts, friends, photos and anything else that might be found behind the privacy wall.”
June 3, 2013 at 10:51 am Posted in: Constitutional Law, Cyberlaw, Privacy, Privacy (Consumer Privacy), Privacy (Electronic Surveillance), Privacy (Gossip & Shaming), Social Network Websites
posted by William McGeveran
In the hubbub surrounding this week’s acquisition of the blogging platform Tumblr by born-again internet hub Yahoo!, I thought one of the most interesting observations concerned the regulation of pornography. It led, by a winding path, to a topic near and dear to the Concurring Opinions gang: Section 230 of the Communications Decency Act, which generally immunizes online intermediaries from liability for user-generated content. (Just a few examples of many ConOp discussions of Section 230: this old post by Dan Solove and a January 2013 series of posts by Danielle Citron on Section 230 and revenge porn here, here, and here.)
Apparently Tumblr has a very large amount of NSFW material compared to other sites with user-generated content. By one estimate, over 11% of the site’s 200,000 most popular blogs are “adult.” By my math, that’s well over 20,000 of the site’s power users.
Predictably, much of the ensuing discussion focused on the implications of all that smut for business and branding. But Peter Kafka explains on All Things D that the structure of Tumblr prevents advertisements for family-friendly brands from showing up next to pornographic content. His reassuring tone almost lets you hear the “whew” from Yahoo! investors (as if harm to brands were the only relevant consideration about porn — which, for many tech journalists and entrepreneurs, it is).
There is another potential porn problem besides bad PR, and it is legal. Lux Alptraum, writing in Fast Company, addressed it. (The author is, according to her bio, “a writer, sex educator, and CEO of Fleshbot, the web’s foremost blog about sexuality and adult entertainment.”) She somewhat conflates two different issues — understandably, since they are related — but that’s part of what I think is interesting. A lot of that user-posted porn is violating copyright law, or regulations meant to protect minors from exploitation, or both. To what extent might Tumblr be on the hook for those violations?
posted by UCLA Law Review
Volume 60, Discourse
posted by James Grimmelmann
Wired’s Kevin Poulsen has a great story whose title tells it all: Use a Software Bug to Win Video Poker? That’s a Federal Hacking Case. Two alleged video-poker cheats, John Kane and Andre Nestor, are being prosecuted under the Computer Fraud and Abuse Act, 18 U.S.C. § 1030. Theirs is a hard case, and it is hard in a way that illustrates why all CFAA cases are hard.
posted by Aaron Saiger
Since I began posting as a guest on Concurring Opinions at the beginning of March, “MOOCs” – massive open online courses – have been a repeated topic. The blog search engine reports that the term did not appear on the blog until 25 Feb 2013; in the six weeks since, MOOCs have been a topic here, here, here, here, here, here, and, in Deven Desai’s interesting post two days ago, here. Deven says, and I agree, that the aggregation of students inside an immersive academic learning community is a real good, and one that cannot be duplicated by a set of MOOCs. But the question MOOCs make pressing is how to value that good once it can be unbundled from training in the classroom. Nannerl Keohane, in a recent review in Perspectives on Politics (11:1, March 2013, p. 318), says that “online education … is the easiest and cheapest way to learn a variety of subjects, especially useful ones,” and describes it as the contemporary analogue of “mutual-aid societies and lyceums.” This seems apt.
University insiders like to say that even unbundled academic community is indispensable, and should be subsidized by both state and university. I suspect that the marketplace will put a much lower value on it. State legislators, ever strapped for cash, will likely do so as well. There will still be a market for 24/7, bricks-and-mortar academic communities; but the online availability of downmarket, imperfect, but genuine partial substitutes will mark such communities more clearly as luxury goods. Once such luxuries are no longer inexorably bundled with direct instruction, the argument that they still deserve state or even philanthropic subsidy is not, it seems to me, a slam-dunk.
Deven posted that the key question is how to “leverage MOOCs and other technology to improve the way education is delivered while not offering only the virtual world” but also social context to those not in the luxury-goods market. Another way of phrasing that question is to ask whether there is a mid-market good, somewhere between the aggregation of naked MOOCs and the bricks-and-mortar private college, that could command interest in the marketplace and justify third-party subsidies. What features of the “code” of online courses – the way that they are presented, taught, bundled together, and converted into credentials – might be adjusted to create a closer approximation of an immersive community, without sacrificing the advantages virtual teaching offers in terms of access over distance, asynchronicity, economies of scale, and cost?
posted by Aaron Saiger
This post is a nerd crowdsourcing request. As a guest blogger I don’t know my audience as well as I might, but I am heartened by the presence of “science fiction” among the options my hosts give me for categorizing my posts; and my teenager assures me that “nerd” is a compliment.
As several of my earlier posts suggest, I am interested in the impact of virtual technology upon K-12 schooling; and one thing I have been doing in my spare time is looking at literary accounts, highbrow and low, of what schooling in the future might look like. A colleague gave me Ernest Cline’s recent Ready Player One, which imagines school in a fully virtualized world that looks a lot like the school I went to, complete with hallways, bullies, and truant teachers – but the software allows the students to mute their fellows and censors student obscenity before it reaches the teachers’ interfaces. Another colleague reminded me of Asimov’s 1951 The Fun They Had, where the teacher is mechanical but the students still wiggly and apathetic. On the back of a public swapshelf, I found Julian May’s 1987 Galactic Milieu series, which imagines brilliant children, all alone on faraway planets, logging on with singleminded seriousness to do their schoolwork all by their lonesomes. And my daughter gave me Orson Scott Card’s famous Ender’s Game, where the bullying is more educative than the mathematics, and scripted by the adults much more carefully.
That seems like an extensive list but really it’s not, and I was never a serious sci-fi person. If anyone is willing to post in the comments any striking literary accounts of schooling in the future, I’d be grateful.
posted by Aaron Saiger
A growing number of lawmakers across the country are taking steps to redefine public education, shifting the debate from the classroom to the pocketbook. Instead of simply financing a traditional system of neighborhood schools, legislators and some governors are headed toward funneling public money directly to families, who would be free to choose the kind of schooling they believe is best for their children, be it public, charter, private, religious, online or at home.
In particular, the Times is right that what is sought here is redefinition. Once, states established and supported institutions – public schools – that parents could take or leave, so long as they educated their children somehow. The new paradigm instead has states provide a quantum of funding earmarked for each child, which parents can deploy at any educational institution of their choosing. The fact that the aid attaches to the child and follows her to her family’s chosen school is much more important than the various labels ascribed to the funding or the institutional provider – public, private, charter, voucher.
As people learn to function within, and get used to, this new paradigm, they will stop thinking of educational politics as the way to create good public schools, and start thinking of it in terms of how big the aid pie is and how it gets divided up. Whether a school is public or private, online or bricks-and-mortar, religious or not – these stop being political questions and start being questions that markets will resolve through supply and demand.
posted by Ryan Calo
As Deven Desai pointed out yesterday, driverless cars could bring a variety of benefits. For instance: driverless cars may be much safer than human drivers. Human error accounts for an enormous percentage of driving fatalities, which number in the tens of thousands each year. In a “perfect,” post-driver world, the circle of fatalities caused by vehicles would simply shrink. The resulting diagram would look something like this:
posted by Deven Desai
Just as Neil Richards’s The Perils of Social Reading (101 Georgetown Law Journal 689 (2013)) comes out in final form, Netflix has released its new social sharing features in partnership with that privacy protector, Facebook. Not that working with Google, Apple, or Microsoft would be much better. There may be things I am missing. But I don’t see how turning on this feature is wise, given that it seems to require you to remember not to share, in ways that make sharing a bit leakier than you may want.
Apparently you have to connect your Netflix account to Facebook to get the feature to work. The way it works after that link is made poses problems.
According to SlashGear, two rows appear. One, called Friends’ Favorites, tells you just that. Now, consider that the recommendation algorithm works in part from your movie ratings. So if you want to signal that odd documentaries, disturbing art movies, or guilty pleasures (this one may range from The Hangover to Twilight) are of interest, you should rate them highly. If you turn this feature on, are all your old ratings shared? And cool! Now everyone knows that you think March of the Penguins and Die Hard are 5 stars. The other row:
is called “Watched By Your Friends,” and it consists of movies and shows that your friends have recently watched. It provides a list of all your Facebook friends who are on Netflix, and you can cycle through individual friends to see what they recently watched. This is an unfiltered list, meaning that it shows all the movies and TV shows that your friends have agreed to share.
Of course, you can control what you share and what you don’t want to share, so if there’s a movie or TV show that you watch, but you don’t want to share it with your friends, you can simply click on the “Don’t Share This” button under each item. Netflix is rolling out the feature over the next couple of days, and the company says that all US members will have access to Netflix social by the end of the week.
Right. So imagine you forget that your viewing habits are broadcast. And what about Roku or other streaming devices? How does one ensure that the “Don’t Share” button is used before the word goes out that you watched one, two, or three movies about drugs, sex, gay culture, how great guns are, etc.?
As Richards puts it, “the ways in which we set up the defaults for sharing matter a great deal. Our reader records implicate our intellectual privacy—the protection of reading from surveillance and interference so that we can read freely, widely, and without inhibition.” So too for video and really any information consumption.