posted by Lea Shaver
It’s become a truism in IP scholarship to introduce a discussion by acknowledging the remarkable recent rise in popular, scholarly, and political interest in our field. Thus readers will recognize a familiar sentiment in the opening line of Amy Kapczynski and Gaëlle Krikorian’s new book:
A decade or two ago, the words “intellectual property” were rarely heard in polite company, much less in street demonstrations or on college campuses. Today, this once technical concept has become a conceptual battlefield.
Only recently, however, has it become possible to put this anecdotal consensus to empirical test.
In December 2010, Google launched the Ngram Viewer, a simple tool for searching its vast repository of digitized books and charting the frequency of specific terms over time. (It controls for the fact that many more books are published today.)
If you haven’t already played around with this tool to explore your own topics of interest, you should. While you’re at it, take a stab at explaining why writing on the Supreme Court rose steadily until approximately 1935 and has dropped just as steadily ever since!
Back to our topic, though. What does this data reveal about the prominence of intellectual property in published discourse?
I generated two graphs, both charting the terms “intellectual property,” “copyright,” “patent,” and “trademark.” First, the long view:
February 3, 2011 at 2:25 pm | Tags: access to knowledge, commons, fair use, Google, Intellectual Property, ngram, open access, public domain | Posted in: Symposium (Access to Knowledge)
posted by Matthew Sag
In my forthcoming article, Copyright and Copy-Reliant Technology, I investigate the significance of transaction costs in the context of technologies that copy expressive works for nonexpressive ends. These “copy-reliant technologies,” such as Internet search engines and plagiarism detection software, do not read, understand, or enjoy copyrighted works, nor do they deliver these works directly to the public. They do, however, necessarily copy them in order to process them as grist for the mill, raw materials that feed various algorithms and indices.
Copy-reliant technologies usually, but not invariably, incorporate some kind of technologically enabled opt-out mechanism to maintain their preferred default rule of open access. For example, every major Internet search engine relies on the Robots Exclusion Protocol to prevent its automated agents from indexing certain content and to remove previously indexed material from its databases as required. A robots.txt file at the root level of a website containing the two lines “User-agent: *” and “Disallow: /” will banish all compliant search engine robots from the entire site.
The Robots Exclusion Protocol is pretty easy to implement and it is highly customizable. The interesting question for copyright law is “does the provision of an opt-out make any difference?”
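The mechanics can be seen with Python’s standard-library robots.txt parser. This is just an illustrative sketch: the example.com URLs and the sample policies are hypothetical, the first mirroring the banish-everyone file described above and the second showing a more customized, per-crawler exclusion.

```python
import urllib.robotparser

# Policy 1: the blanket opt-out described in the post.
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /",
])

# A compliant crawler asks before fetching; here everything is off-limits.
print(rp.can_fetch("Googlebot", "http://example.com/anything"))  # False

# Policy 2: a customized rule excluding one crawler from one directory only.
rp2 = urllib.robotparser.RobotFileParser()
rp2.parse([
    "User-agent: Googlebot",
    "Disallow: /private/",
])

print(rp2.can_fetch("Googlebot", "http://example.com/private/page"))  # False
print(rp2.can_fetch("Googlebot", "http://example.com/public/page"))   # True
```

The protocol’s force is entirely a matter of crawler compliance: `can_fetch` is a check the crawler chooses to make, not a technical barrier, which is precisely why its legal significance is an open question.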
In the Article, I argue that opt-outs are significant in the context of a fair use analysis. The doctrinal analysis is in the paper, but the basic point is that when transaction costs are otherwise high, opt-out mechanisms can play a critical role in preserving a default rule of open access while still allowing individuals to have their preferences respected.
The notion that the rights of the property owner can be protected under permissive default rules coupled with an opt-out is hardly new. Robert Ellickson famously describes the “fencing out” rule whereby cattle were allowed to roam freely on the property of others unless that property was fenced. Landowners still maintained their property rights, subject to the burden of fencing out neighbors’ cattle. Presumably, if cows could read, a sign not unlike the Robots Exclusion Protocol would have been sufficient.
posted by Matthew Sag
The now defunct version of the Google Book Class Action Settlement is a complicated document consisting of 141 pages, 160 definitions, 17 separate articles and 116 separate clauses, not including the substantial provisions contained within the 15 attachments where several important features of the deal were buried.
The initial draft of the agreement dates back to October 28, 2008, when Google announced that it had reached a settlement of the highly publicized class-action lawsuit brought by the Authors Guild and another equally important lawsuit brought by the Association of American Publishers.
Opposition from various quarters caused the parties to reconsider the details of the settlement and a new version is due on Monday November 9, 2009. In my recent article I compared the settlement to the most likely outcome of the litigation the settlement resolves. In this post I speculate about the contents of the revised agreement.
The essential features of the old settlement agreement were:
- Money. Google made some pretty significant financial concessions, including one-time payments of over $100 million and a revenue sharing agreement.
- Digitization, Indexing & Search. In return for these concessions Google received the right to continue to operate its book search engine substantially in its current form, which is arguably consistent with copyright law’s fair use doctrine.
- Commodification. The settlement also gave Google the ability to explore new revenue possibilities in cooperation with authors and publishers. The highlights consisted of extensive book previews, consumer e-book purchases, institutional subscriptions to the entire Google Book database and various other “New Revenue Opportunities”.
- New institutional arrangements. Beyond the mechanics of the agreement itself, the key elements of the new Google Book universe were to be the “Book Rights Registry” and the “Author-Publisher Procedures”. Although the Registry received more attention from critics of the settlement, the Author-Publisher Procedures appeared to be the key vulnerability from a class-action fairness perspective. These procedures determine who controls the exploitation of a work within the Google Book universe and who benefits from that exploitation. In many cases the Author-Publisher Procedures act like a standard form publishing contract that supersedes deals negotiated before the importance of digital rights was widely realized.
- Orphan works exploitation. The treatment of orphan works pervades all aspects of the current Settlement agreement. The agreement increased public access to orphan works by presumptively including almost all works in most commercially significant uses. Orphan works could be digitized, indexed, made available for partial previews, sold as consumer purchases and incorporated into institutional subscriptions. These uses would benefit not only Google: the attributable revenues would also flow in part to the Registry and to registered authors and publishers.
- Orphan works monopoly. In its current form the Settlement only solves the orphan works problem for Google.
What should we expect on Monday?
The most desirable change from an antitrust perspective would be to allow Google’s competitors to exploit orphan works on the same terms as Google. The problem with this solution is that it further strains the boundaries of class action law and looks more and more like private legislation. This should not, in my view, be enough to derail the deal if the parties can show that all of the relevant sub-class interests were adequately represented.
The Author-Publisher Procedures enhance the coordinating function of the Settlement by streamlining the incorporation of existing author-publisher contractual terms into the framework of the Google Book universe. However, where an existing author-publisher contract gives both parties some control over electronic exploitation, or simply fails to make any provision for electronic rights, the Author-Publisher Procedures effectively overwrite those contracts. These new terms do not appear to systematically disadvantage either authors or publishers, but they strike me as a one-size-fits-all solution that could be substantially improved upon.
Finally, I expect the revenue sharing aspects of the deal to become more complicated.
posted by Matthew Sag
Should we fear Google? This question, unthinkable ten, maybe even five, years ago, seems to dominate internet policy discussion today. AT&T is afraid of Google Voice. Apple might be afraid of the Google Phone. Microsoft is afraid that Google Apps will make its Office suite redundant. These fears are justified, but they are also good. In most cases Googlephobia is a condition suffered by competitors. Google will probably kill off some competitors, but it will force many more to continue to innovate and provide better products to the consumer at lower prices. So, yes, some people should fear Google. But should we the public?
“Fear is often preceded by astonishment, and is so far akin to it, that both lead to the senses of sight and hearing being instantly aroused. In both cases the eyes and mouth are widely opened, and the eyebrows raised.” Charles Darwin, The Expression of the Emotions in Man and Animals.
In its pre-settlement incarnation, the Google Book Search (GBS) project was merely an astonishing attempt to build a comprehensive search engine to allow full text searching inside millions of books. The GBS envisaged in the Settlement (before the DOJ sent the parties back to the drawing-board) was much more ambitious. Not satisfied with digitization, indexing and limited display of books consistent with copyright law’s fair use doctrine, Google, the Authors Guild and a handful of publishers struck a deal which allowed for the commoditization of digital books as direct substitutes for paper copies. Subject to an opt-out and a few other exclusions, the Settlement swept in almost all books subject to U.S. copyrights and established an entirely new institutional framework for clearing digital book rights.
My personal view is that justified astonishment at the GBS Settlement has, in too many cases, given way to unjustified fear. Google is still far from being the new Microsoft that the Department of Justice’s Christine Varney has asserted it to be. It certainly does not act like it. Google’s track record of openness and innovation is heartening, and there is very little evidence so far that it plans on abandoning its “don’t be evil” corporate culture.
Googlephobia appears to be the foundation of some pretty wild assertions in the context of the Google Book dispute in particular. Google presents itself as set to liberate out-of-print books from their dusty dungeons on the relatively inaccessible shelves of the world’s great libraries. Critics of the deal (and of the initial, more modest GBS) see plans for the monopolization of hitherto nonexistent markets, and the destruction of libraries, universities and even the book itself.
The Google Book Settlement was not perfect, but my own fear is that Googlephobia and the intervention of the Department of Justice have left us worse off than we would otherwise have been. The Google skeptics are right about a number of the Settlement’s shortcomings, but now that the parties are renegotiating the deal we had all better hope that GBS version 3 is better, fairer, and more accessible, not just smaller and less ambitious.
It might be naive to simply trust in Google, but the fear Google now inspires seems equally misplaced.
posted by Kaimipono D. Wenger
Howto: Fight anorexia and associated body image disorders, plus combat DMCA abuse — all in one handy blog post. (In which Cory Doctorow eviscerates the weak C&D letter asking BoingBoing’s ISP to remove a bizarrely photoshopped image of a mutant anorexic model.)
Excellent multitasking, folks. In future DMCA smackdowns, Cory will cure cancer, save the rainforest, and abolish the designated hitter rule.
posted by Deven Desai
So I had my iTunes open and on shuffle yesterday when Monty Python’s “Finland” came on. That was what prompted me to check YouTube for Python offerings. Now the Python chaps have offered their own channel. This video has the usual Python cheek as they talk about YouTube, being ripped off, and the open plea that viewers buy the products after they enjoy them. The clip also touts the troupe’s interest in showing the clips as they wanted them to be shown and in high quality.
Fun stuff, but here is the problem. The Monty Python Channel has nowhere near the quantity of Python material one can find elsewhere on YouTube. I wonder whether the Python folks chose to leave the other posters alone and offer what they see as the best or most in-demand clips in a branded area. Then again, they may have decided to go after the other posters too. And to think this train of thought all started in Finland. Finland? Yes, because I could take a CD, put it into MP3 format, and listen to “Finland” as a shuffle tune. But wait. There’s more! The devil you say. No, really.
Check out the clip for Finland below. It is a good-quality stream of the music. It is funny and adds a fair amount of creativity. It attributes the visual work and the software used to make the work. It also acknowledges Python as the source of the music. In addition, it has embedded ads to allow a viewer to buy the song from iTunes or Amazon. Now, given all the new work, Python’s failure to offer a similar video (even if they did, the video is a new work, albeit one needing the song to make much sense), AND the ads, is it fair use? After all, YouTube and the poster probably take a cut, as would the seller, but as the Python folks acknowledge, they too are giving access to and enjoyment of their clips away for free with the plea that people buy their work. As my essay Individual Branding: How the Rise of Individual Creation and Distribution of Cultural Products Confuses the Intellectual Property System argues, these facts present confusing situations for intellectual property. Sharing, attribution, some control, encouraging purchases, remixing, and more can all be seen in my encounter with Finland, which may be my new personal metaphor for IP. Watch the video and tell me what you think: fair use, attribution, new work, infringement, all of the above?
posted by Deven Desai
The Chronicle of Higher Education (subscription required, so no link) notes that Hollywood tends to ask universities and colleges for permission before setting films or television shows at a particular campus. So Felicity attends the University of New York instead of NYU, and Legally Blonde is set at Harvard instead of, wait for it … the University of Chicago? Odd but apparently true (my guess is that this turn of events helped the film; no offense to Chicago, but as a matter of pop culture Harvard probably takes the prize). One possible culprit, according to the article, is our friend US News and World Report and the ranking game. Since the report started ranking undergraduate institutions, films reference real schools, rather than a random State U, 29 percent of the time, as opposed to 19 percent before the US News games began. The claim is that references might seem to be endorsements. So Stanford only allows “aspirational” portrayals; read here, goody-goody overachievers. The article claims that Stealing Harvard was originally Stealing Stanford, but the Farm rejected that idea “since Stanford is need blind” and the story of needing to steal to go to the school would be unreal (as many fictional stories are). In contrast, Harvard seems to realize that a fictional story is just that and seems more generous about the use of its name and so on. Note that most schools are more restrictive about shooting on campus but may embrace the idea for the fees they can charge.
All well and good, but the suggestion in the article, which the schools seem to share, that there really is a trademark claim here is troubling (note that Dawson’s Creek also wished to avoid conflict and invented Worthington University as a generic Ivy, although it was ironically shot at Duke). The expansive notion of association seems to fuel this perspective. But as Sandy Rierson and I argue in Confronting the Genericism Conundrum, uses such as these are expressive and in that sense irrelevant to the market transaction trademark is supposed to be about. On a similar wavelength, Mark Lemley and Mark McKenna seem to be arguing that other uses of trademarks are not relevant to trademark analysis. To be clear, I have yet to read the paper, and it may be that this sort of use would be actionable according to Mark and Mark (or dare I say it? Dare. Dare. Mark y Mark?).
In short, if one considers the feedback loop in play here, the more expressive uses that are made, the less likely people will think that Stanford endorsed a portrayal. In addition, what about more critical commentary that could be set at a university? Setting up a system of permissions is dangerous. Last, maybe Harvard has it correct: people are not that stupid. They can tell the difference between a fictional story and a claim to reality. Can’t they?
Creative Commons Attribution 2.0 License