Stanford Law Review Online: Software Speech

Stanford Law Review

The Stanford Law Review Online has just published a Note by Andrew Tutt entitled Software Speech. Tutt argues that current approaches to determining when software or speech generated by software can be protected by the First Amendment are incorrect:

When is software speech for purposes of the First Amendment? This issue has taken on new life amid recent accusations that Google used its search rankings to harm its competitors. This spring, Eugene Volokh coauthored a white paper explaining why Google’s search results are fully protected speech that lies beyond the reach of the antitrust laws. The paper sparked a firestorm of controversy, and in a matter of weeks, dozens of scholars, lawyers, and technologists had joined the debate. The most interesting aspect of the positions on both sides—whether contending that Google search results are or are not speech—is how both get First Amendment doctrine only half right.

He concludes:

By stopping short of calling software “speech,” entirely and unequivocally, the Court would acknowledge the many ways in which software is still an evolving cultural phenomenon unlike others that have come before it. In discarding tests for whether software is speech on the basis of its literal resemblance either to storytelling (Brown) or information dissemination (Sorrell), the Court would strike a careful balance between the legitimate need to regulate software, on the one hand, and the need to protect ideas and viewpoints from manipulation and suppression, on the other.

Read the full article, Software Speech, at the Stanford Law Review Online.

Better Stories, Better Laws, Better Culture

I first happened across Julie Cohen’s work around two years ago, when I started researching privacy concerns related to Amazon.com’s e-reading device, Kindle.  Law professor Jessica Litman and free software doyen Richard Stallman had both talked about a “right to read,” but never was this concept placed on so sure a legal footing as it was in Cohen’s essay from 1996, “A Right to Read Anonymously.”  Her piece helped me to understand the illiberal tendencies of Kindle and other leading commercial e-readers, which are (and I’m pleased more people are coming to understand this) data gatherers as much as they are appliances for delivering and consuming texts of various kinds.

Truth be told, while my engagement with Cohen’s “Right to Read Anonymously” essay proved productive for this particular project, it also provoked a broader philosophical crisis in my work.  The move into rights discourse was a major departure — a ticket, if you will, into the world of liberal political and legal theory.  Many there welcomed me with open arms, despite the awkwardness with which I shouldered an unfamiliar brand of baggage trademarked under the name, “Possessive Individualism.”  One good soul did manage to ask about the implications of my venturing forth into a notion of selfhood vested in the concept of private property.  I couldn’t muster much of an answer beyond suggesting, sheepishly, that it was something I needed to work through.

It’s difficult and even problematic to divine back-story based on a single text.  Still, having read Cohen’s latest, Configuring the Networked Self, I suspect that she may have undergone a crisis not unlike my own.  The sixteen years spanning “A Right to Read Anonymously” and Configuring the Networked Self are enormous.  I mean that less in terms of the time frame (during which Cohen was highly productive, let’s be clear) than in terms of the refinement in the thinking.  Between 1996 and 2012 you see the emergence of a confident, postliberal thinker.  This is someone who, confronted with the complexities of everyday life in highly technologized societies, now sees possessive individualism for what it is: a reductive management strategy, one whose conception of society seems more appropriate to describing life on a preschool playground than it does to forms of interaction mediated by the likes of Facebook, Google, Twitter, Apple, and Amazon.

In this respect, Configuring the Networked Self is an extraordinary work of synthesis, drawing together a diverse array of fields and literatures: legal studies in its many guises, especially its critical variants; science and technology studies; human-computer interaction; phenomenology; post-structuralist philosophy; anthropology; American studies; and surely more.  More to the point, it’s an unusually generous example of scholarly work, given Cohen’s ability to see in and draw out of this material its very best contributions.

I’m tempted to characterize the book as a work of cultural studies given the central role the categories culture and everyday life play in the text, although I’m not sure Cohen would have chosen that identification herself.  I say this not only because of the book’s serious challenges to liberalism, but also because of the sophisticated way in which Cohen situates the cultural realm.

This is more than just a way of saying she takes culture seriously.  Many legal scholars have taken culture seriously, especially those interested in questions of privacy and intellectual property, which are two of Cohen’s foremost concerns.  What sets Configuring the Networked Self apart from the vast majority of culturally inflected legal scholarship is her unwillingness to take for granted the definition — you might even say, “being” — of the category, culture.  Consider this passage, for example, where she discusses Lawrence Lessig’s pathbreaking book Code and Other Laws of Cyberspace:

The four-part Code framework…cannot take us where we need to go.  An account of regulation emerging from the Newtonian interaction of code, law, market, and norms [i.e., culture] is far too simple regarding both instrumentalities and effects.  The architectures of control now coalescing around issues of copyright and security signal systemic realignments in the ordering of vast sectors of activity both inside and outside markets, in response to asserted needs that are both economic and societal.  (chap. 7, p. 24)

What Cohen is asking us to do here is to see culture not as a domain distinct from the legal, or the technological, or the economic, which is to say, something to be acted upon (regulated) by one or more of these adjacent spheres.  This liberal-instrumental (“Newtonian”) view may have been appropriate in an earlier historical moment, but not today.  Instead, she is urging us to see how these categories are increasingly embedded in one another and how, then, the boundaries separating the one from the other have grown increasingly diffuse and therefore difficult to manage.

The implications of this view are compelling, especially where law and culture are concerned.  The psychologist Abraham Maslow once said, “it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.”  In the old, liberal view, one wielded the law in precisely this way — as a blunt instrument.  Cohen, for her part, still appreciates how the law’s “resolute pragmatism” offers an antidote to despair (chap. 1, p. 20), but her analysis of the “ordinary routines and rhythms of everyday practice” in and around networked culture leads her to a subtler conclusion (chap. 1, p. 21).  She writes: “practice does not need to wait for an official version of culture to lead the way….We need stories that remind people how meaning emerges from the uncontrolled and unexpected — stories that highlight the importance of cultural play and of spaces and contexts within which play occurs” (chap. 10, p. 1).

It’s not enough, then, to regulate with a delicate hand and then “punt to culture,” as one attorney memorably put it in an anthropological study of the free software movement.  Instead, Cohen seems to be suggesting that we treat legal discourse itself as a form of storytelling, one akin to poetry, prose, or any number of other types of everyday cultural practice.  Important though they may be, law and jurisprudence are but one means for narrating a society, or for arriving at its self-understandings and range of acceptable behaviors.

Indeed, we’re only as good as the stories we tell ourselves.  This much Jaron Lanier, one of the participants in this week’s symposium, suggested in his recent book, You Are Not a Gadget.  There he showed how the metaphorics of desktops and filing, generative though they may be, have nonetheless limited the imaginativeness of computer interface design.  We deserve computers that are both functionally richer and experientially more robust, he insists, and to achieve that we need to start telling more sophisticated stories about the relationship of digital technologies and the human body.  Lousy stories, in short, make for lousy technologies.

Cohen arrives at an analogous conclusion.  Liberalism, generative though it may be, has nonetheless limited our ability to conceive of the relationships among law, culture, technology, and markets.  They are all in one another and of one another.  And until we can figure out how to narrate that complexity, we’ll be at a loss to know how to live ethically, or at the very least mindfully, in a densely interconnected and information-rich world.  Lousy stories make for lousy laws and ultimately, then, for lousy understandings of culture.

The purposes of Configuring the Networked Self are many, no doubt.  For those of us working in the twilight zone of law, culture, and technology, it is a touchstone for how to navigate postliberal life with greater grasp — intellectually, experientially, and argumentatively.  It is, in other words, an important first chapter in a better story about ordinary life in a high-tech world.

Stanford Law Review Online: Don’t Break the Internet

Stanford Law Review

The Stanford Law Review Online has just published a piece by Mark Lemley, David S. Levine, and David G. Post on the PROTECT IP Act and the Stop Online Piracy Act. In Don’t Break the Internet, they argue that the two bills — intended to counter online copyright and trademark infringement — “share an underlying approach and an enforcement philosophy that pose grave constitutional problems and that could have potentially disastrous consequences for the stability and security of the Internet’s addressing system, for the principle of interconnectivity that has helped drive the Internet’s extraordinary growth, and for free expression.”

They write:

These bills, and the enforcement philosophy that underlies them, represent a dramatic retreat from this country’s tradition of leadership in supporting the free exchange of information and ideas on the Internet. At a time when many foreign governments have dramatically stepped up their efforts to censor Internet communications, these bills would incorporate into U.S. law a principle more closely associated with those repressive regimes: a right to insist on the removal of content from the global Internet, regardless of where it may have originated or be located, in service of the exigencies of domestic law.

Read the full article, Don’t Break the Internet, by Mark Lemley, David S. Levine, and David G. Post, at the Stanford Law Review Online.

If Cows Could Read

In my forthcoming article, Copyright and Copy-Reliant Technology, I investigate the significance of transaction costs in the context of technologies that copy expressive works for nonexpressive ends. These “copy-reliant technologies,” such as Internet search engines and plagiarism detection software, do not read, understand, or enjoy copyrighted works, nor do they deliver these works directly to the public. They do, however, necessarily copy them in order to process them as grist for the mill, raw materials that feed various algorithms and indices.

Copy-reliant technologies usually, but not invariably, incorporate some kind of technologically enabled opt-out mechanism to maintain their preferred default rule of open access. For example, every major Internet search engine relies on the Robots Exclusion Protocol to prevent its automated agents from indexing certain content and to remove previously indexed material from its databases as required.  A robots.txt file at the root level of a website in the form of “User-agent: * Disallow: /” will banish all compliant search engine robots from a website.
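
To make the mechanics concrete, here is a minimal sketch, using Python’s standard urllib.robotparser module and a hypothetical example.com address, of the check a protocol-compliant crawler performs before fetching or indexing a page:

    from urllib import robotparser

    # Fetch and parse the site's robots.txt (hypothetical URL, for illustration only).
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # "*" matches any user agent. With the "User-agent: * Disallow: /" rule
    # described above in place, can_fetch() returns False, so a compliant
    # crawler skips the page and drops any copy it previously indexed.
    page = "https://example.com/some-article.html"
    if rp.can_fetch("*", page):
        print("indexing permitted:", page)
    else:
        print("opted out, do not index:", page)

The point, for present purposes, is simply that honoring the opt-out amounts to a few lines of code for the crawler and a one-line file for the site owner.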

The Robots Exclusion Protocol is pretty easy to implement and it is highly customizable. The interesting question for copyright law is “does the provision of an opt-out make any difference?”

In the Article, I argue that opt-outs are significant in the context of a fair use analysis. The doctrinal analysis is in the paper, but the basic point is that when transaction costs are otherwise high, opt-out mechanisms can play a critical role in preserving a default rule of open access while still allowing individuals to have their preferences respected.

The notion that the rights of the property owner can be protected under permissive default rules coupled with an opt-out is hardly new.  Robert Ellickson famously describes the “fencing out” rule whereby cattle were allowed to roam freely on the property of others unless that property was fenced.  Landowners still maintained their property rights, subject to the burden of fencing out neighbors’ cattle.  Presumably, if cows could read, a sign not unlike the Robots Exclusion Protocol would have been sufficient.

Could Yahoo! + Bing = Death to Google?

Yahoo! continues to be in the news as a company that has lost its way. After its merger troubles, Yahoo! has now sold its search business to the formerly evil and now oddly white knight(ish) Microsoft. It seems that Yahoo! and MS are now in a deal where MS’s Bing will power (and have some brand placement on) Yahoo!’s search. Others can go into the drop from about $46 billion to a $4 or 5 billion sale price, and into other Yahoo! acts that make one wonder what the company is doing. For now, I want to remind folks about a little relationship called Yahoo! search powered by, wait for it, Google. Yes, Google. I wonder whether the G would be where it is today if Yahoo! had not given it that key placement. As one article pointed out:

In a unique twist, Yahoo didn’t simply renew the deal for Google to be its “backup” partner, used only when Yahoo itself doesn’t have an answer. Instead, the company has embraced Google’s results even more tightly. Unveiled to the general public today is a new Yahoo search results page, where there is no longer a separation between Yahoo’s own human-powered listings and Google’s crawler-based results. Instead, the two are blended together.

Read the whole article for some fascinating perspectives on Yahoo! versus Google when Y was the big player. To be fair, Yahoo! appears to have had small chances to buy Google (but one might also say that after being apparently turned down for help by Yahoo!, the Google folks knew that they should not sell even at $3 billion). I for one don’t think I can say that Yahoo! should have known that Google was going to pop its IPO the way it did. For that matter, had then-CEO Terry Semel bought Google, he would have had to take it public to show that it was worth the money. As Wired notes, “Google’s revenue stood at a measly $240 million a year. Yahoo’s was about $837 million. And yet, with Yahoo’s stock price still hovering at a bubble-busted $7 a share, a $5 billion purchase price would essentially mean that Yahoo would have to spend its entire market value to swing the deal. It would be a merger of equals, not a purchase.”

So now we have the Yahoo! MS deal. It could be that Yahoo! is again running up the white flag about its ability to be a real technology/engineering company (“But now we have empirical evidence: At Yahoo, the marketers rule, and at Google the engineers rule. And for that, Yahoo is finally paying the price.”). But it may also be a way for MS to grab Yahoo!’s customers, compete on search, and show that it still has the chops to beat back Google’s relentless drive to be all things to everyone. If so, maybe the two companies will balance each other out for a bit. Either way, it seems that, as the NY Times pointed out, Yahoo! has exited the search game because, as its CEO admits, it cannot play in it at the level that MS and Google can (billions of dollars). Whether Yahoo! can find a new way to be relevant is another issue. The Times article describes Yahoo!’s severe dysfunction and what to me reads like classic Internet company arrogance. That being said, maybe Yahoo! is picking its best fight, and with a little MS mixed in, Google will have to stay honest too. Or maybe this move is Yahoo!’s way of taking on Google while Yahoo! heads out of our world.

Wolfram Blah

News reports were tantalizing — the new search engine could be a Google killer. The Times read:

A revolutionary new search engine that computes answers rather than pointing to websites will be launched officially today amid heated talk that it could challenge the might of Google.

Other news outlets, like the LA Times, were also gushing.

Does Wolfram Alpha live up to the hype?