Category: Google & Search Engines


Bullet, So Not Dodged

The question that I had been dreading came at last: “Mom, can I have a Facebook page?”  My daughter provided a strong defense: she’s 13, so she meets Facebook’s Terms of Service age requirement; she’s nearly an adult in her religion’s eyes (her bat mitzvah is in a week); past practice proves she’s responsible; and, well, she feels ready.  (And, I just discovered, she’s done her homework: see the Yahoo! Answers thread “My mom won’t let me get a Facebook page, how do I convince her?” that I found on my computer.)

Next came the conversation.  We talked about how social media activity is increasingly part of one’s biography: anything said and done in social network spaces becomes part of who you are in our Information Age.  Colleges may ask for your Facebook password.  Over 70% of employers look at social media data when interviewing and hiring (and, sad to say, the outcomes are grim: more than 60% of the time, applicants don’t get the interview or job because of their social network profiles).  It’s not just what you post that speaks volumes — your social network (friends and their friends) tells some of your story for you.  There goes any control that you thought you had.  FB users often wrestle with whether they should de-friend those whose online personas don’t match their sensibilities (or the way in which they want others to perceive them).  This means that users need to keep a careful eye on their friends’ profiles (as well as ever-changing privacy settings).

That’s a lot of responsibility.  Or, as Bill Keller of the New York Times put it when he allowed his 13-year-old daughter to join Facebook, he felt “a little as if I had passed my child a pipe of crystal meth.”  Beyond the potential privacy and reputational concerns that accompany social media use, an online life has other potential perils, like overuse (and thus inattention to studies, face-to-face family time, etc.) that cyber-pessimists underscore (see Nicholas Carr’s The Shallows).  And bullying, serious harassment, and bigotry increasingly appear in mainstream social media in ways that kids can’t necessarily avoid (my work explores those problems, see here, here, and here, as well as terrific work by guest bloggers Ari Waldman and Mary Anne Franks).  Of course, there’s also lots of positive stuff emerging from these networked spaces.  Social media outlets like Facebook allow us to enact our personalities.  They let us express ourselves in ever-changing and expanding ways.  FB and other outlets host civic engagement, as Helen Norton and I have emphasized.

I wonder, too, if my kid has a meaningful choice.  Can digital natives really stay away from social media if all of their friends socialize there?  And will employers and colleges expect that applicants partake in these activities because everyone else does?  Someday, will resisting having a Facebook profile express something negative about you?  Will it signal that you’re not socially adjusted or successful?  As Scott Peppet underscores in his work, we may be forced to give up our privacy to show that we are indeed healthy, social, smart, and the like.  That’s a lot to process, right?  I’m going to chew on this a while.  Your thoughts are most welcome!


Cyberharassment’s Waterloo

I begin my Co-Op blogging stint with deep appreciation for Danielle Citron’s invitation and for the entire Co-Op community’s indulgence. I am honored to be a small part of a wonderful online community that brings out the best in us and, for that matter, Web 2.0. My name is Ari, I am a Legal Scholar Teaching Fellow (just like a VAP) at California Western School of Law, and I am a student of the interplay among the First Amendment, the Internet, and other modern technologies and their effects on minority populations, like gays and lesbians. I go on the professor job market this fall. I have a weekly blog (every Wednesday) over at the country’s most popular gay news site, Towleroad, for those interested in perspectives on LGBT legal issues for a mass audience. I also have a healthy relationship with physical fitness and an unhealthy relationship with the store Jack Spade. If there’s counseling for the latter, I’d appreciate a reference. Kidding…

For my month of blogging, I hope to engage with you in a few conversations, mostly about cyberharassment and the First Amendment, and hopefully with a healthy dose of humor.

My current project is the third in a series of projects about cyberharassment. The previous articles, available here, address the effects of cyberharassment on LGBT youth, argue for the use of affirmative “soft power” rather than after-the-fact criminalization to solve the problem, and create a new analytical framework for adjudicating student free speech defenses to a school’s authority to punish cyberaggressors. Now I am considering the effect that cyberharassment, particularly harassment of a minority group, has on civic participation and the realization of democratic values. I argue that Internet intermediaries’ self-regulation of their sites and services to filter out hate, sexual harassment, and other aggression conforms with long-standing First Amendment values.

As President Obama likes to say, let me be clear. I do not mean to suggest that the First Amendment applies as a limit on the activities of private actors like Facebook or MySpace or Google; rather, I think that, contrary to libertarian First Amendment scholars, we can expect these online intermediaries to regulate content, and that doing so reflects the democratic interests that underlie the First Amendment.

Here’s the draft argument, in brief, that I am currently working out: The view of the Internet as an unencumbered and unfettered town square deserving the same Rawlsian liberal approach to free speech is wrong. Every online interaction is governed by intermediaries of varying kinds, all of which serve as filters through which our online speech reaches our online communities. Traditional intermediaries have the power to regulate content consistent with the First Amendment, especially when not doing so would interfere with their and their users’ ability to participate in civil society. We see this more Aristotelian/communitarian approach to First Amendment values in intermediary jurisprudence — from publishers to bookstores, and from schools to workplaces. And, like schools and workplaces, which can regulate their members’ speech in order to fulfill the institutions’ purposes, so too can online intermediaries like Facebook.

This project is in the early stages, and I always welcome comments/suggestions/evisceration of the argument. More to come…

I look forward to continuing this and other discussions with this splendid community.

Beyond Cyber-Utopianism

What encapsulates the ethos of Silicon Valley? Promoting his company’s prowess at personalization, Mark Zuckerberg once said, “A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa.” Scott Cleland argues that “you can’t trust Google, Inc.,” compiling a critical mass of dubious practices that might each seem quite understandable taken alone. Apple’s “reality distortion field” is the topic of numerous satires. As the internet increasingly converges through these three companies, what are the values driving their decisionmaking?

For some boosters, these are not terribly important questions: the logic of the net itself assures progress. But for Chris Lehmann, the highflying internet-academic-industrial complex has failed to think critically about a consolidating, commercialized cyberspace. Lehmann, previously featured on this blog for his book, offers a fairly scathing review of Clay Shirky’s Cognitive Surplus:

With the emergence of Web 2.0–style social media (things like Facebook, Twitter and text messaging), Shirky writes, we inhabit an unprecedented social reality, “a world where public and private media blend together, where professional and amateur production blur, and where voluntary public participation has moved from nonexistent to fundamental.” This Valhalla of voluntary intellectual labor represents a stupendous crowdsourcing, or pooling, of the planet’s mental resources, hence the idea of the “cognitive surplus.” . . .

[But why] assign any special value to an hour spent online in the first place? Given the proven models of revenue on the web, it’s reasonable to assume that a good chunk of those trillion-plus online hours are devoted to gambling and downloading porn. Yes, the networked web world does produce some appreciable social goods, such as the YouTubed “It Gets Better” appeals to bullied gay teens contemplating suicide. But there’s nothing innate in the character of digital communication that favors feats of compassion and creativity; for every “It Gets Better” video that goes viral, there’s an equally robust traffic in white nationalist, birther and jihadist content online. . . .

Read More

Behind the Filter Bubble: Hidden Maps of the Internet

A small corner of the world of search took another step toward personalization today, as Bing moved to give users the option to personalize their results by drawing on data from their Facebook friends:

Research tells us that 90% of people seek advice from family and friends as part of the decision making process. This “Friend Effect” is apparent in most of our decisions and often outweighs other facts because people feel more confident, smarter and safer with the wisdom of their trusted circle.

Today, Bing is bringing the collective IQ of the Web together with the opinions of the people you trust most, to bring the “Friend Effect” to search. Starting today, you can receive personalized search results based on the opinions of your friends by simply signing into Facebook. New features make it easier to see what your Facebook friends “like” across the Web, incorporate the collective know-how of the Web into your search results, and begin adding a more conversational aspect to your searches.

The announcement almost perfectly coincides with the release of Eli Pariser’s book The Filter Bubble, which argues that “as web companies strive to tailor their services (including news and search results) to our personal tastes, there’s a dangerous unintended consequence: We get trapped in a ‘filter bubble’ and don’t get exposed to information that could challenge or broaden our worldview.” I have earlier worried about both excessive personalization and integration of layers of the web (such as social and search, or carrier and device). I think Microsoft may be reaching for one of very few strategies available to challenge Google’s dominance in search. But I also fear that this is one more example of the “filter bubble” Pariser worries about.
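To make the personalization mechanism (and the worry about it) concrete, here is a minimal, purely illustrative Python sketch of how a friend-based re-ranking layer of the kind Bing describes might work. The data model, the boost function, and the example URLs are my own assumptions, not Bing’s or Facebook’s actual systems; the point is only that adding a bonus for results your friends have “liked” systematically changes what rises to the top.

```python
# Illustrative sketch only: a toy re-ranker that boosts results "liked" by
# friends.  The data model and weights are hypothetical, not Bing's or
# Facebook's actual algorithms.

from dataclasses import dataclass


@dataclass
class Result:
    url: str
    base_relevance: float  # the engine's own relevance score, 0..1


def rerank(results, friend_likes, boost=0.5):
    """Re-order results so that friend-liked pages float upward.

    friend_likes maps a URL to the number of friends who "liked" it.
    Each additional like adds a diminishing bonus to the base score.
    """
    def score(r):
        likes = friend_likes.get(r.url, 0)
        return r.base_relevance + boost * (1 - 1 / (1 + likes))

    return sorted(results, key=score, reverse=True)


if __name__ == "__main__":
    results = [
        Result("https://example.org/contrarian-analysis", 0.90),
        Result("https://example.org/popular-with-friends", 0.70),
    ]
    friend_likes = {"https://example.org/popular-with-friends": 12}

    for r in rerank(results, friend_likes):
        print(r.url)
    # Prints the friend-endorsed page first, even though the engine
    # scored the other page as more relevant.
```

Even in this toy version, the friend-endorsed page overtakes a result the engine itself scored as more relevant, which is precisely the narrowing effect Pariser calls the filter bubble.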
Read More


UCLA Law Review Vol. 58, Issue 4 (April 2011)

Volume 58, Issue 4 (April 2011)


Articles

Digital Exhaustion · Aaron Perzanowski & Jason Schultz · 889
Fixing Inconsistent Paternalism Under Federal Employment Discrimination Law · Craig Robert Senn · 947
Awakening the Press Clause · Sonja R. West · 1025


Comments

Still Fair After All These Years? How Claim Preclusion and Issue Preclusion Should Be Modified in Cases of Copyright’s Fair Use Doctrine · Karen L. Jones · 1071
Patenting Everything Under the Sun: Invoking the First Amendment to Limit the Use of Gene Patents · Krysta Kauble · 1123



Technology Musings

Recently the New York Times carried a front-page story about an eighth-grade girl who foolishly took a nude picture of herself with her cell phone and sent it to a fickle boy – sexting. The couple broke up, but her picture circulated among her schoolmates with a text message “Ho Alert” added by a frenemy.  In less than 24 hours, “hundreds, possibly thousands, of students had received her photo and forwarded it. In short order, students would be handcuffed and humiliated, parents mortified and lessons learned at a harsh cost.”  The three students who set off the “viral outbreak” were charged with disseminating child pornography, a Class C felony.

The story struck a nerve, not only with the affected community, but with the Times’ readers as well.  Stories about the misuse and dangers of technology provide us with opportunities to educate our students, and ourselves. In a Washington State sexting incident, for example, the teen charged had to prepare a public service statement warning other teens about sexting in order to avoid harsher criminal penalties.  But the teen’s nude photo is still floating around.  Information has permanence on the internet.

Few of us appreciate how readily obtainable our personal information is on the internet.   Read More

Vaidhyanathan’s Googlization: A Must-Read on Where “Knowing” is Going

Google’s been in the news a lot the past month. Concerned about the quality of its search results, the company is imposing new penalties on “content farms” and certain firms, including JC Penney and Overstock.com. Accusations are flying fast and furious; the “antichrist of Silicon Valley” has flatly told the Googlers to “stop cheating.”

As the debate heats up and accelerates in internet time, it’s a pleasure to turn to Siva Vaidhyanathan’s The Googlization of Everything, a carefully considered take on the company composed over the past five years. After this week is over, no one is going to really care whether Google properly punished JC Penney for scheming its way to the top non-paid search slot for “grommet top curtains.” But our culture will be influenced in ways large and small by Google’s years of dominance, whatever happens in coming years. I don’t have time to write a full review now, but I do want to highlight some key concepts in Googlization, since they will have lasting relevance for studies of technology, law, and media for years to come.

Cryptopticon

Dan Solove helped shift the privacy conversation from “Orwell to Kafka” in a number of works over the past decade. Other scholars of surveillance have first used, and then criticized, the concept of the “Panopticon” as a master metaphor for the conformity-inducing pressures of ubiquitous monitoring. Vaidhyanathan argues that monitoring is now so ubiquitous that most people have given up trying to conform. As he observes,

[T]he forces at work in Europe, North America, and much of the rest of the world are the opposite of a Panopticon: they involve not the subjection of the individual to the gaze of a single, centralized authority, but the surveillance of the individual, potentially by all, always by many. We have a “cryptopticon” (for lack of a better word). Unlike Bentham’s prisoners, we don’t know all the ways in which we are being watched or profiled—we simply know that we are. And we don’t regulate our behavior under the gaze of surveillance: instead, we don’t seem to care.

Of course, that final “we” is a bit overinclusive, for as Vaidhyanathan later shows in a wonderful section on the diverging cultural responses to Google Street View, there are bastions of resistance to the technology:
Read More

Search Neutrality as Disclosure and Auditing

Search neutrality is on the rise in Europe, and on the ropes in the US (or at least it should be, according to James Grimmelmann). We barely have net neutrality here, and the tech press bridles at the thought of a sclerotic DC agency regulating god-like Googlers. I want to question that conventional wisdom by showing how modest the “search neutrality” agenda now is, and how well it fits with classic ideals of neutrality in law.

There are many reasons to think that Google will continue to dominate the general-purpose search field. Sure, searchers and advertisers can access a vibrant field of also-rans. But most users will always want a shot at Google for serious searching and advertising, just as a mobile internet connection is no substitute for a high-bandwidth one for many important purposes.

Given these parallels, I’ve compared principles of broadband non-discrimination and search non-discrimination. But virtually every time the term “search neutrality” comes up in conversation, people tend to want to end the argument by saying “there is no one best way to order search results—editorial discretion is built into the process of ranking sites.” (See, for example, Clay Shirky’s response to my position in this documentary.) To critics, a neutral search engine would have to perform the (impossible) task of ranking every site according to some Platonic ideal of merit.

But on my account of neutrality, a neutral search engine must merely avoid certain suspect behaviors, including:
Read More


The Ugly Persistence of Internet Celebrity

Many desperately try to garner online celebrity.  They host YouTube channels devoted to themselves. They share their thoughts in blog postings and on social network sites.  They post revealing pictures of themselves on Flickr.  To their dismay, though, no one pays much attention.  But for others, the Internet spotlight finds them and mercilessly refuses to yield ground.  For instance, in 2007, a sports blogger obtained a picture of a high-school pole vaulter, Allison Stokke, at a track meet and posted it online.  Within days, her picture spread across the Internet, from message boards and sport sites to porn sites and social network profiles.  Impostors created fake profiles of Ms. Stokke on social network sites, and Ms. Stokke was inundated with emails from interested suitors and journalists.  At the time, Ms. Stokke told the Washington Post that the attention felt “demeaning” because the pictures dominated how others saw her rather than her pole-vaulting accomplishments.

Time’s passage has not helped Stokke shake her online notoriety.  Sites continuously update their photo galleries with pictures of Stokke taken at track meets.  Blogs boast of finding pictures of Stokke at college, with headings like “Your 2010 Allison Stokke Update,” “Allison Stokke’s Halloween Cowgirl Outfit Accentuates the Total Package,” and “Only Known Allison Stokke Cal Picture Found.”  Postings include obscene language.  A Google search of her name with the safe-search filter on yields 129,000 results, while one with no filter yields 220,000 hits, which suggests that much of what circulates about her is explicit enough to be filtered out.  Encyclopedia Dramatica has a wiki devoted to her (though Wikipedia has faithfully taken down entries about Ms. Stokke).

Read More


The Aftermath of WikiLeaks

The U.K.’s freedom of information commissioner, Christopher Graham, recently told The Guardian that the WikiLeaks disclosures irreversibly altered the relationship between the state and the public.  As Graham sees it, the WikiLeaks incident makes clear that governments need to be more open and proactive, “publishing more stuff, because quite a lot of this is only exciting because we didn’t know it. . . WikiLeaks is part of the phenomenon of the online, empowered citizen . . . these are facts that aren’t going away.  Government and authorities need to wise up to that.”  If U.K. officials take Graham seriously (and I have no idea if they will), the public may see more of government.  Whether that openness in fact empowers citizens or simply gives the appearance of transparency is up for grabs.

In the U.S., few officials have called for more transparency after the release of the embassy cables.  Instead, government officials have successfully pressured internet intermediaries to drop their support of WikiLeaks.  According to Wired, Senator Joe Lieberman, for instance, was instrumental in persuading Amazon.com to kick WikiLeaks off its web hosting service.  Senator Lieberman has suggested that Amazon, as well as Visa and PayPal, came to their own decisions about WikiLeaks. Lieberman noted:

“While corporate entities make decisions based on their obligations to their shareholders, sometimes full consideration of those obligations requires them to act as responsible citizens.  We offer our admiration and support to those companies exhibiting courage and patriotism as they face down intimidation from hackers sympathetic to WikiLeaks’ philosophy of irresponsible information dumps for the sake of damaging global relationships.”

Unlike the purely voluntary decisions that Internet intermediaries make with regard to cyber hate (see here), Amazon’s response raises serious concerns about what Seth Kreimer has called “censorship by proxy.”  Kreimer’s work (as well as Derek Bambauer’s terrific Cybersieves) explores the American government’s pressure on intermediaries to “monitor or interdict otherwise unreachable Internet communications” to aid the “War on Terror.”

Legislators have also sought to ensure the opacity of certain governmental information with new regulations.  Proposed legislation (spearheaded by Senator Lieberman) would make it a federal crime for anyone to publish the name of a U.S. intelligence source.  The Securing Human Intelligence and Enforcing Lawful Dissemination (SHIELD) Act would amend a section of the Espionage Act that forbids the publication of classified information on U.S. cryptographic secrets or overseas communications intelligence.  The SHIELD Act would extend that prohibition to information on human intelligence, criminalizing the publication of information “concerning the identity of a classified source or information of an element of the intelligence community of the United States” or “concerning the human intelligence activities of the United States or any foreign government” if such publication is prejudicial to U.S. interests.

Another issue on the horizon may be the immunity afforded providers or users of interactive computer services who publish content created by others under section 230 of the Communications Decency Act.  An aside: section 230 is not inconsistent with the proposed SHIELD Act, because section 230 excludes federal criminal claims from its protections.  (This would not mean that website operators like Julian Assange would be strictly liable for others’ criminal acts on their services; the question would be whether a website operator’s actions violated the SHIELD Act.)   Now for my main point: Senator Lieberman has expressed an interest in broadening the exemptions to section 230’s immunity to require the removal of certain content, such as videos featuring Islamic extremists.  Given his interest and the current concerns about security risks related to online disclosures, Senator Lieberman may find this an auspicious time to revisit section 230’s broad immunity.