

Behind the Filter Bubble: Hidden Maps of the Internet

A small corner of the world of search took another step toward personalization today, as Bing moved to give users the option to personalize their results by drawing on data from their Facebook friends:

Research tells us that 90% of people seek advice from family and friends as part of the decision making process. This “Friend Effect” is apparent in most of our decisions and often outweighs other facts because people feel more confident, smarter and safer with the wisdom of their trusted circle.

Today, Bing is bringing the collective IQ of the Web together with the opinions of the people you trust most, to bring the “Friend Effect” to search. Starting today, you can receive personalized search results based on the opinions of your friends by simply signing into Facebook. New features make it easier to see what your Facebook friends “like” across the Web, incorporate the collective know-how of the Web into your search results, and begin adding a more conversational aspect to your searches.
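How such re-ranking might work is easy to sketch, even though Bing has not published its mechanics. Below is a minimal Python sketch, purely illustrative: the function name, data structures, and boost weight are all invented here and do not reflect Bing’s actual implementation.

    # Hypothetical "friend effect" re-ranking. All names, structures, and
    # weights are invented for illustration; Bing's real system is not public.
    def rerank_with_friend_likes(results, friend_likes, boost=0.25):
        """Boost each result's base relevance score by the number of friends
        who 'liked' its URL. `results` is a list of (url, score) pairs;
        `friend_likes` maps a URL to the set of friends who liked it."""
        def personalized_score(url, score):
            return score * (1 + boost * len(friend_likes.get(url, ())))
        return sorted(
            ((url, personalized_score(url, score)) for url, score in results),
            key=lambda pair: pair[1],
            reverse=True,
        )

    # A page two friends liked can leapfrog a slightly more relevant page
    # that no friend has endorsed; this is the drift Pariser worries about.
    results = [("generic-review.example", 0.90), ("friend-fave.example", 0.80)]
    likes = {"friend-fave.example": {"alice", "bob"}}
    print(rerank_with_friend_likes(results, likes))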

The announcement almost perfectly coincides with the release of Eli Pariser’s book The Filter Bubble, which argues that “as web companies strive to tailor their services (including news and search results) to our personal tastes, there’s a dangerous unintended consequence: We get trapped in a ‘filter bubble’ and don’t get exposed to information that could challenge or broaden our worldview.” I have previously worried about both excessive personalization and the integration of layers of the web (such as social and search, or carrier and device). I think Microsoft may be reaching for one of the very few strategies available to challenge Google’s dominance in search. But I also fear that this is one more example of the “filter bubble” Pariser worries about.
Read More


UCLA Law Review Vol. 58, Issue 4 (April 2011)


Articles

Digital Exhaustion, by Aaron Perzanowski & Jason Schultz, p. 889
Fixing Inconsistent Paternalism Under Federal Employment Discrimination Law, by Craig Robert Senn, p. 947
Awakening the Press Clause, by Sonja R. West, p. 1025


Comments

Still Fair After All These Years? How Claim Preclusion and Issue Preclusion Should Be Modified in Cases of Copyright’s Fair Use Doctrine, by Karen L. Jones, p. 1071
Patenting Everything Under the Sun: Invoking the First Amendment to Limit the Use of Gene Patents, by Krysta Kauble, p. 1123



Technology Musings

Recently the New York Times carried a front-page story about an eighth-grade girl who foolishly took a nude picture of herself with her cell phone and sent it to a fickle boy – sexting. The couple broke up, but her picture circulated among her schoolmates with a text message, “Ho Alert,” added by a frenemy. In less than 24 hours, “hundreds, possibly thousands, of students had received her photo and forwarded it. In short order, students would be handcuffed and humiliated, parents mortified and lessons learned at a harsh cost.” The three students who set off the “viral outbreak” were charged with disseminating child pornography, a Class C felony.

The story struck a nerve, not only with the affected community, but with the Times’ readers as well. Stories about the misuse and dangers of technology provide us with opportunities to educate our students, and ourselves. In a Washington State sexting incident, for example, the teen charged had to prepare a public service statement warning other teens about sexting in order to avoid harsher criminal penalties. But the teen’s nude photo is still floating around. Information has permanence on the internet.

Few of us appreciate how readily obtainable our personal information is on the internet. Read More

Vaidhyanathan’s Googlization: A Must-Read on Where “Knowing” is Going

Google’s been in the news a lot the past month. Concerned about the quality of its search results, it’s imposing new penalties on “content farms” and on certain firms, including JC Penney and Overstock.com. Accusations are flying fast and furious; the “antichrist of Silicon Valley” has flatly told the Googlers to “stop cheating.”

As the debate heats up and accelerates in internet time, it’s a pleasure to turn to Siva Vaidhyanathan’s The Googlization of Everything, a carefully considered take on the company composed over the past five years. After this week is over, no one is going to really care whether Google properly punished JC Penney for scheming its way to the top non-paid search slot for “grommet top curtains.” But our culture will be influenced in ways large and small by Google’s years of dominance, whatever happens in coming years. I don’t have time to write a full review now, but I do want to highlight some key concepts in Googlization, since they will have lasting relevance for studies of technology, law, and media for years to come.

Cryptopticon

Dan Solove helped shift the privacy conversation from “Orwell to Kafka” in a number of works over the past decade. Other scholars of surveillance have first used, and then criticized, the concept of the “Panopticon” as a master metaphor for the conformity-inducing pressures of ubiquitous monitoring. Vaidhyanathan argues that monitoring is now so ubiquitous that most people have given up trying to conform. As he observes,

[T]he forces at work in Europe, North America, and much of the rest of the world are the opposite of a Panopticon: they involve not the subjection of the individual to the gaze of a single, centralized authority, but the surveillance of the individual, potentially by all, always by many. We have a “cryptopticon” (for lack of a better word). Unlike Bentham’s prisoners, we don’t know all the ways in which we are being watched or profiled—we simply know that we are. And we don’t regulate our behavior under the gaze of surveillance: instead, we don’t seem to care.

Of course, that final “we” is a bit overinclusive, for as Vaidhyanathan later shows in a wonderful section on the diverging cultural responses to Google Street View, there are bastions of resistance to the technology:
Read More

Search Neutrality as Disclosure and Auditing

Search neutrality is on the rise in Europe, and on the ropes in the US (or at least should be, according to James Grimmelmann). We barely have net neutrality here, and the tech press bridles at the thought of a sclerotic DC agency regulating god-like Googlers. I want to question that conventional wisdom by showing how modest the “search neutrality” agenda now is, and how well it fits with classic ideals of neutrality in law.

There are many reasons to think that Google will continue to dominate the general purpose search field. Sure, searchers and advertisers can access a vibrant field of also-rans. But most users will always want a shot at Google for serious searching and advertising, just as a mobile internet connection is no substitute for a high bandwidth one for many important purposes.

Given these parallels, I’ve compared principles of broadband non-discrimination and search non-discrimination. But virtually every time the term “search neutrality” comes up in conversation, people tend to want to end the argument by saying “there is no one best way to order search results—editorial discretion is built into the process of ranking sites.” (See, for example, Clay Shirky’s response to my position in this documentary.) To critics, a neutral search engine would have to perform the (impossible) task of ranking every site according to some Platonic ideal of merit.
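One way to see how modest the auditing agenda can be: an auditor need not rank every site against some Platonic ideal of merit; it can simply measure narrow, suspect patterns, such as how often an engine’s own properties occupy the top slots. Here is a minimal Python sketch under stated assumptions: fetch_results is a hypothetical stand-in for however an auditor collects ranked result domains, and OWN_DOMAINS is an illustrative list, not an authoritative one.

    # Sketch of a narrow search audit under the assumptions stated above.
    OWN_DOMAINS = {"google.com", "youtube.com"}  # illustrative only

    def self_preference_rate(queries, fetch_results, top_n=3):
        """Fraction of sampled queries whose top-N results include one of the
        engine's own properties. A high rate is not proof of bias, but it is
        the kind of statistic a disclosure-and-auditing regime could surface."""
        hits = sum(
            1 for query in queries
            if any(d in OWN_DOMAINS for d in fetch_results(query)[:top_n])
        )
        return hits / len(queries)

    # Toy usage with a stubbed fetcher standing in for real result pages.
    sample = {
        "maps": ["google.com", "mapquest.com", "bing.com"],
        "videos": ["vimeo.com", "youtube.com", "dailymotion.com"],
        "hotels": ["tripadvisor.com", "expedia.com", "kayak.com"],
    }
    print(self_preference_rate(list(sample), lambda q: sample[q]))  # 2 of 3 queries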

But on my account of neutrality, a neutral search engine must merely avoid certain suspect behaviors, including:
Read More


The Ugly Persistence of Internet Celebrity

Many desperately try to garner online celebrity. They host YouTube channels devoted to themselves. They share their thoughts in blog postings and on social network sites. They post revealing pictures of themselves on Flickr. To their dismay, though, no one pays much attention. But for others, the Internet spotlight finds them and mercilessly refuses to yield. For instance, in 2007, a sports blogger obtained a picture of a high-school pole vaulter, Allison Stokke, at a track meet and posted it online. Within days, her picture spread across the Internet, from message boards and sports sites to porn sites and social network profiles. Impostors created fake profiles of Ms. Stokke on social network sites, and Ms. Stokke was inundated with emails from interested suitors and journalists. At the time, Ms. Stokke told the Washington Post that the attention felt “demeaning” because the pictures dominated how others saw her, rather than her pole-vaulting accomplishments.

Time’s passage has not helped Stokke shake her online notoriety. Sites have continuously updated their photo galleries with pictures of Stokke taken at track meets. Blogs have boasted of finding pictures of Stokke at college, with headings like “Your 2010 Allison Stokke Update,” “Allison Stokke’s Halloween Cowgirl Outfit Accentuates the Total Package,” and “Only Known Allison Stokke Cal Picture Found.” Much of what is posted is obscene: a Google search of her name with the safety setting on yields 129,000 results, while one with no safety setting returns 220,000. Encyclopedia Dramatica has a wiki devoted to her (though Wikipedia has faithfully taken down entries about Ms. Stokke).

Read More


The Aftermath of Wikileaks

The U.K.’s freedom of information commissioner, Christopher Graham, recently told The Guardian that the WikiLeaks disclosures irreversibly altered the relationship between the state and the public. As Graham sees it, the WikiLeaks incident makes clear that governments need to be more open and proactive, “publishing more stuff, because quite a lot of this is only exciting because we didn’t know it. . . . WikiLeaks is part of the phenomenon of the online, empowered citizen . . . these are facts that aren’t going away. Government and authorities need to wise up to that.” If U.K. officials take Graham seriously (and I have no idea if they will), the public may see more of government. Whether that greater openness in fact provides insights that empower citizens, or simply gives the appearance of transparency, is up for grabs.

In the U.S., few officials have called for more transparency after the release of the embassy cables. Instead, government officials have successfully pressured internet intermediaries to drop their support of WikiLeaks. According to Wired, Senator Joe Lieberman, for instance, was instrumental in persuading Amazon.com to kick WikiLeaks off its web hosting service. Senator Lieberman has suggested that Amazon, as well as Visa and PayPal, came to their own decisions about WikiLeaks. Lieberman noted:

“While corporate entities make decisions based on their obligations to their shareholders, sometimes full consideration of those obligations requires them to act as responsible citizens.  We offer our admiration and support to those companies exhibiting courage and patriotism as they face down intimidation from hackers sympathetic to WikiLeaks’ philosophy of irresponsible information dumps for the sake of damaging global relationships.”

Unlike the purely voluntary decisions that Internet intermediaries make with regard to cyber hate, see here, Amazon’s response raises serious concerns about what Seth Kreimer has called “censorship by proxy.” Kreimer’s work (as well as Derek Bambauer’s terrific Cybersieves) explores the American government’s pressure on intermediaries to “monitor or interdict otherwise unreachable Internet communications” to aid the “War on Terror.”

Legislators have also sought to ensure the opacity of certain governmental information with new regulations. Proposed legislation (spearheaded by Senator Lieberman) would make it a federal crime for anyone to publish the name of a U.S. intelligence source. The Securing Human Intelligence and Enforcing Lawful Dissemination (SHIELD) Act would amend a section of the Espionage Act that forbids the publication of classified information on U.S. cryptographic secrets or overseas communications intelligence. The SHIELD Act would extend that prohibition to information on human intelligence, criminalizing the publication of information “concerning the identity of a classified source or informant of an element of the intelligence community of the United States” or “concerning the human intelligence activities of the United States or any foreign government” if such publication is prejudicial to U.S. interests.

Another issue on the horizon may be the immunity afforded providers or users of interactive computer services who publish content created by others under section 230 of the Communications Decency Act. An aside: section 230 is not inconsistent with the proposed SHIELD Act, as it excludes federal criminal claims from its protections. (This would not mean that website operators like Julian Assange would be strictly liable for others’ criminal acts on their services; the question would be whether a website operator’s actions violated the SHIELD Act.) Now for my main point: Senator Lieberman has expressed an interest in broadening the exemptions to section 230’s immunity to require the removal of certain content, such as videos featuring Islamic extremists. Given his interest and the current concerns about security risks related to online disclosures, Senator Lieberman may find this an auspicious time to revisit section 230’s broad immunity.


Advancing the Fight Against Cyber Hate with Greater Transparency and Clarity about Hate Speech Policies

Today, online intermediaries voluntarily seek to combat digital hatred, often addressing hate speech in their Terms of Service Agreements or Community Guidelines. Those agreements and guidelines tend to include vague prohibitions of hate speech. The terms of service for Yahoo!, for instance, require users of some services to refrain from generating “hateful or racially, ethnically or otherwise objectionable” content, without saying more. Intermediaries can advance the fight against digital hate with more transparency and clarity about the terms of, and harms to be prevented by, their hate speech policies, as well as the consequences of policy violations. With more transparency and clarity, intermediaries can make behavioral expectations more understandable, and users can more fully appreciate the significance of digital citizenship, see here, here, here, and here. The more intermediaries and users understand why a particular policy prohibits a certain universe of speech, the more likely they are to put into practice, and adhere to, that policy in a way that achieves those objectives.

Before seeking to provide guidance on how intermediaries might do that, it is important to recognize that efforts to define hate speech raise at least two significant challenges. First, many disagree over which, if any, of the harmful effects potentially generated by such speech are sufficiently serious to warrant action. Second, controversy also remains about the universe of speech that is actually likely to trigger harms deemed important enough to avoid. So, for example, even if an intermediary defines hate speech as that which tends to incite violence against targeted groups, how do we determine which speech has the propensity to do that? Much of the difficulty lies in identifying the factors relevant to making such causal predictions. In Intermediaries and Hate Speech: Fostering Digital Citizenship for the Information Age (forthcoming BU Law Review 2011), Helen Norton and I don’t pretend that we can make hard choices easy, and we recognize that intermediaries’ choices among various options may turn on a variety of issues: their assessment of the relative costs of hate speech and its constraint; empirical predictions about what sort of speech is indeed likely to lead to what sorts of harms; the breadth of their business interests, available resources, and the like; and their sense of corporate social responsibility to foster digital citizenship. Intermediaries’ choices on how to define hate speech and the harms that they seek to avoid — however difficult — can and should be made in a more principled and transparent way. Read More

Can Suspicious Activity Reports Trigger Health Data Gathering?

In an article entitled “Monitoring America,” Dana Priest and William Arkin describe an extraordinary pattern of governmental surveillance. To be sure, in the wake of the attacks of 9/11, there are important reasons to increase the government’s ability to understand threats to order. However, the persistence, replicability, and searchability of the databases now being compiled for intelligence purposes raise very difficult questions about the use and abuse of profiles, particularly in cases where health data informs the classification of individuals as threats.
Read More

A Peace Treaty for the Google Wars?

As Google grows, so do fears about its possible overreach. A Wall Street Journal article quotes several companies worried that Google will use its dominance in search to invade their turf:

Google Inc. increasingly is promoting some of its own content over that of rival websites when users perform an online search, prompting competing sites to cry foul. The Internet giant is displaying links to its own services—such as local-business information or its Google Health service—above the links to other, non-Google content found by its search engine. . . .

TripAdvisor LLC Chief Executive Stephen Kaufer said the traffic his site gets from Google’s search engine dropped by more than 10%, on a seasonally adjusted basis, since mid-October—just before Google announced the latest change to the way its search engine shows information about local businesses. TripAdvisor.com, whose top source of traffic is Google, reviews hotels and other businesses frequented by travelers. . . . Google’s promotion of its own content over others’ has been one of many issues raised during the federal antitrust review of the company’s acquisition of ITA Software Inc., people involved in the discussions have said.

European antitrust authorities are also concerned. Jia Lynn Yang of the WaPo explains, “As the tech giant spreads its reach, it is making new enemies who fear that once Google steps onto their turf it will use its almighty search engine to quash them.” Any site other than the top result may fear that Google has “hard coded bias” against it, in Ben Edelman’s memorable phrase.

This is a hard problem because a) Google’s ranking methods are secret, and b) Google’s results have been protected as speech by some courts. Therefore, even if a site wanted to sue Google on some kind of business tort theory, it might never get to discovery, because the company could successfully characterize its rankings as a mere “opinion” of sites’ relevance.

But let’s just say that a disgruntled Google rival seeks not to change Google’s rankings, but to find out how they are generated. It is likely to run into the brick wall of trade secrecy—unless it can claim that the rankings violate some federal policy, like bans on stealth marketing. But even then, the challenger is going to run into real problems trying to understand exactly how Google ranks sites. What then?
Read More