Archive for the ‘Wiki’ Category
posted by Deven Desai
Dave Hoffman and Salil Mehra’s Wikitruth Through Wikiorder is a fascinating paper on how Wikipedia and one type of commons works. When I saw the article “Wikipedia is editorial warzone, says study,” I thought that perhaps the legal academic work would cross over. The full paper, Dynamics of Conflicts in Wikipedia, did not mention the Hoffman and Mehra paper. Maybe the sociological inquiry did not match. Maybe the other authors read the paper and did not see a way to cite it. For me, I wonder whether other fields draw across disciplines as much as law seems to do. In any event, the methods are different, but the issues are related. So perhaps both papers have a place for those interested in the way Wikipedia manages its vast system.
posted by Deven Desai
Costs of education need to come down. Open course materials are growing. Maybe education will indeed undergo a transformation in the next ten years. There are many things that will need to change for true education reform to take place. But better resources matter. Enter Rice University. Its OpenStax College initiative tries to address the problem of source fragmentation. In other words, resources, resources everywhere but no time to synch may be less of a problem than it has been so far. One nice touch is format flexibility: web, e-textbook, or hard copy options are available. “The first five textbooks in the series–Physics, Sociology, Biology, Concepts of Biology, and Anatomy and Physiology–have been completed, and the Physics and Sociology textbooks are up at openstaxcollege.org.” The model is curious:
Using philanthropic funding, Baraniuk and the team behind OpenStax contracted professional content developers to write the books, and each book went through the industry-standard review cycle, including peer review and classroom testing. The books are scope- and sequence-compatible with traditional textbooks, and they contain all of the ancillary materials such as PowerPoint slides, test banks, and homework solutions.
So there is professional level seeding of content while also allowing for wiki-like contribution:
Each book has its own dashboard, called StaxDash. Along with displaying institutions that have adopted the book, StaxDash is also a real-time erratum tracker: Faculty who are using the books are encouraged to submit errors or problems they’ve found in the text. “There’s also the issue of pointing out aspects of the text that need to be updated,” notes Baraniuk, “for example, keeping the Sociology book up-to-date as the Arab Spring continues to evolve. People can post these issues, and our pledge is that we are going to fix any issues as close to ‘in real time’ as possible. These books will be up-to-date in a matter of hours or days instead of years.” When accessing a book through its URL on Connexions, students and faculty will always get the most up-to-date version of the book. Faculty can, however, use the “version control” feature on Connexions to lock in a particular version of the book for use throughout a semester.
If you thought that keeping up with authoritative versions of an ebook and citing it (trust me it is odd to cite to a location in a Kindle book) was messy, this new model will throw you. Then again, that is a small issue.
Group contributions for the latest on an issue and the ability to choose versions is a great idea. Law texts that could update the latest cases or a change in legislation as they happen and then be refined over time would be wonderful. Of course teachers use other ways to reach these goals. But if crowds/commons style approaches to texts work, we may see better and less expensive versions of textbooks. How the system will manage disputes about content and education boards’ issues with approval remains to be seen. Still, the promise of this approach should make the miasmic aspects of education boards look silly and create a press for improved ways to have quality content available for educators and, most important, for students.
posted by Derek Bambauer
Today, you can’t get to The Oatmeal, or Dinosaur Comics, or XKCD, or (less importantly) Wikipedia. The sites have gone dark to protest the Stop Online Piracy Act (SOPA) and the PROTECT IP Act, America’s attempt to censor the Internet to reduce copyright infringement. This is part of a remarkable, distributed, coordinated protest effort, both online and in realspace (I saw my colleague and friend Jonathan Askin headed to protest outside the offices of Senators Charles Schumer and Kirsten Gillibrand). Many of the protesters argue that America is headed in the direction of authoritarian states such as China, Iran, and Bahrain in censoring the Net. The problem, though, is that America is not alone: most Western democracies are censoring the Internet. Britain does it for child pornography. France: hate speech. The EU is debating a proposal to allow “flagging” of objectionable content for ISPs to ban. Australia’s ISPs are engaging in pre-emptive censorship to prevent even worse legislation from passing. India wants Facebook, Google, and other online platforms to remove any content the government finds problematic.
Censorship is on the march, in democracies as well as dictatorships. With this movement we see, finally, the death of the American myth of free speech exceptionalism. We have viewed ourselves as qualitatively different – as defenders of unfettered expression. We are not. Even without SOPA and PROTECT IP, we are seizing domain names, filtering municipal wi-fi, and using funding to leverage colleges and universities to filter P2P. The reasons for American Internet censorship differ from those of France, South Korea, or China. The mechanism of restriction does not. It is time for us to be honest: America, too, censors. I think we can, and should, defend the legitimacy of our restrictions – the fight on-line and in Congress and in the media shows how we differ from China – but we need to stop pretending there is an easy line to be drawn between blocking human rights sites and blocking Rojadirecta or Dajaz1.
Cross-posted at Info/Law.
January 18, 2012 at 5:31 pm Posted in: Advertising, Architecture, Civil Procedure, Constitutional Law, Culture, Current Events, Cyberlaw, First Amendment, Google & Search Engines, Innovation, Intellectual Property, Media Law, Movies & Television, Politics, Technology, Web 2.0, Wiki
posted by Danielle Citron
Time magazine recently did a true-to-form story on Wikipedia, where guest editors (and our very own featured author) Jonathan Zittrain (see here too), Robert McHenry, Benjamin Mako Hill, and Mike Schroepfer assisted in writing/editing/re-writing a feature entitled Wikipedia’s “Ten Years of Inaccuracy and Remarkable Detail.” As the piece explained, Wikipedia just celebrated its 10th birthday. The site has 17 million entries in more than 250 languages, quite a feat given that the Encyclopaedia Britannica has only 120,000, and only in English. The Time wiki-like piece notes that Wikipedia has a “diverse, international body of contributors.”
According to The New York Times, most contributors are male. More specifically, “less than 15 percent of its hundreds of thousands of contributors are female.” This, in turn, has skewed the gender disparity of topics and emphasis. Wikimedia’s executive director Sue Gardner explains that topics favored by girls such as friendship bracelets can seem short when compared with lengthy articles on something boys typically like such as toy soldiers or baseball cards. The New York Times notes that a category with five Mexican feminist writers might not seem so impressive when compared with 45 articles on characters in “The Simpsons.”
Why is this so? Joseph Reagle, a fellow at the Berkman Center for Internet and Society at Harvard and author of “Good Faith Collaboration: The Culture of Wikipedia,” explains that Wikipedia’s early contributors shared “many characteristics with the hard-driving hacker crowd,” including an ideology that “resists any efforts to impose rules or even goals like diversity, as well as a culture that may discourage women.” He notes that adopting an ideology of openness means being “open to very difficult, high-conflict people, even misogynists.” The demographics of Wikipedia’s editors may also stem, in part, from the tendency of women to be “less willing to assert their opinions in public.”
How Wikipedia is now, and has been, responding is worth noting. Sue Gardner told the Times that she hopes to raise the share of women contributors through subtle persuasion and outreach to welcome newcomers to Wikipedia. Dave Hoffman and Salil Mehra’s terrific piece Wikitruth Through Wikiorder demonstrates that the site has already fostered efforts to create a more inclusive environment. As Hoffman and Mehra explain, Wikipedia has an Arbitration Committee whose volunteer members rule on disputes and set forth concrete rules on how users should behave. The Arbitration Committee has sanctioned users who make homophobic, ethnic, racial, or gendered attacks or who stalk and harass others. According to Hoffman and Mehra’s empirical study, in cases involving impersonation or anti-social conduct like hateful attacks, the Arbitration Committee bans the user 21% of the time. Wikipedia’s more than 1,500 administrators, in turn, enforce those rules. Wikipedia also permits users to report impolite, uncivil, or other difficult communications with editors on its Wikiquette alerts noticeboard.
posted by Danielle Citron
Many desperately try to garner online celebrity. They host YouTube channels devoted to themselves. They share their thoughts in blog postings and on social network sites. They post revealing pictures of themselves on Flickr. To their dismay, though, no one pays much attention. But for others, the Internet spotlight finds them and mercilessly refuses to yield ground. For instance, in 2007, a sports blogger obtained a picture of a high-school pole vaulter, Allison Stokke, at a track meet and posted it online. Within days, her picture spread across the Internet, from message boards and sport sites to porn sites and social network profiles. Impostors created fake profiles of Ms. Stokke on social network sites, and Ms. Stokke was inundated with emails from interested suitors and journalists. At the time, Ms. Stokke told the Washington Post that the attention felt “demeaning” because the pictures dominated how others saw her rather than her pole-vaulting accomplishments.
Time’s passage has not helped Stokke shake her online notoriety. Sites continuously update their photo galleries with pictures of Stokke taken at track meets. Blogs boast of finding pictures of Stokke at college with headings like “Your 2010 Allison Stokke Update,” “Allison Stokke’s Halloween Cowgirl Outfit Accentuates the Total Package,” and “Only Known Allison Stokke Cal Picture Found.” Postings include obscene language. For instance, a Google search of her name on a safety setting yields 129,000 results while one with no safety setting has 220,000 hits. Encyclopedia Dramatica has a wiki devoted to her (though Wikipedia has faithfully taken down entries about Ms. Stokke).
January 30, 2011 at 6:16 pm Posted in: Cyber Civil Rights, Cyberlaw, Google & Search Engines, Privacy, Privacy (Consumer Privacy), Privacy (Gossip & Shaming), Social Network Websites, Technology, Tort Law, Wiki
posted by Dave Hoffman
In Wikitruth Through Wikiorder, Salil Mehra and I detailed the history of Wikipedia’s dispute resolution process. We highlighted the role of Alex Roshuk, a Brooklyn lawyer and site volunteer who played a key early role in the process by suggesting that the site’s dispute resolution process should look like a “very simplified version[s] of the commercial or international arbitration programs of the American Arbitration Association.” When writing the article, I confess I found it ironic that a lawyer proposed such a formal process, and believed that it was evidence that legalism is an inescapable (and dominant) part of American society. I just found Roshuk’s response to our article online. He offers a stinging indictment of the Wikimedia foundation, and what’s come of the dispute resolution system. As he argues:
While I originally suggested in the fall of 2003 that Wikipedia have a structured dispute resolution process, instead of making this process simple and straightforward, ADR at Wikipedia has become a complex system that has all kinds of hard to understand rules. Perhaps it is the management of this dispute resolution process (or lack thereof) is what has caused or contributed to a lot of Wikipedia users leaving the project and the ripple effect this system has on the general behavior of editors and administrators whose behavior is mediated by this process . . . After seeing the discussion develop at Wikipedia in the fall of 2003 I saw that there were a lot of people who misunderstood the idea of arbitration, They wanted to make it something formal, like a Wikipedia court system, the ArbCom, as it was called became a place where someone could obtain status in the Wikipedia community, originally by being appointed by Mr. James “Jimbo” Wales, one of the founders of Wikipedia, and later by election. When I suggested this kind of system my intention was to get people to talk, mostly through mediation by a neutral third party, to come to a mutual understanding that editors were all contributing knowledge, not fighting against each other to be “right” or “wrong”.
This view of the pathologies of the Arbitration system isn’t, of course, unique to Roshuk, nor is it really in tension with the story Salil and I set out in Wikitruth. But it is notable that Roshuk has such a dim view of the site’s excessive legalization, and that he attributes the dominance of law to a desire for status and hierarchy, instead of the formal structure of the process itself.
(Image source: Wikilove.)
posted by Danielle Citron
At Balkanization, Professor Marvin Ammori has a thoughtful post on the Wikileaks story. Professor Ammori, who will be guest blogging with us soon, gave me the thumbs up on reproducing his post. Hopefully, it will spark some interesting discussion on CoOp. Here is Ammori’s post:
Many of our nation’s landmark free speech decisions are not about heroes–several are about flag-burners, racists, Klansmen, and those with political views outside the mainstream. And yet we measure our commitment to freedom of speech, in part, by our willingness to protect even their rights despite disagreement with what they say, and why they say it.
The story of Wikileaks publishing U.S. diplomatic cables has become the story of Julian Assange: is he a hero or villain, a high-tech terrorist or enemy combatant? Should the U.S., which may have already empanelled a grand jury in Virginia, prosecute him as a criminal under the Espionage Act of 1917 or under the Computer Fraud and Abuse Act?
Though I have spent years advocating for Internet freedom, I don’t think Assange is a hero for leaking these diplomatic cables. According to plausible reports, the leaks have harmed U.S. interests, made the work of U.S. diplomats more difficult, likely endangered lives of allies, and may have set back democracy in Zimbabwe and perhaps elsewhere. Even some of Assange’s friends at Wikileaks are doubting Assange’s heroism: a few left him to launch a rival site and to write a tell-all book. Whatever the harms of secrecy and over-classification, Assange’s actions have caused tremendous damage. No wonder polls show nearly 60% of Americans believe the U.S. should arrest Assange and charge him with a crime.
My initial reaction was similar. I thought that if a case could be made against Assange, one should be made.
But, as time passed, the political and legal downsides of prosecution came into clearer focus, and I am rethinking that initial reaction. Despite still believing Assange’s actions have been harmful, I have now come to the opposite conclusion—not for the benefit of Assange, but for the benefit of Americans and of the United States.
Prosecuting Assange could do more harm than good for our freedom of the press and would inflict further harm on diplomatic effectiveness. Despite the appeal of prosecuting Assange, it is not worth the cost. We will not get the cables back. We will not deter aspiring Wikileakers, as both our allies and our enemies know. We will, as Dean Geoffrey Stone has best articulated, likely sacrifice established principles of freedom of the press in doing so.
Here are some thoughts on why we should think twice about prosecuting Assange, categorized by harms to the U.S.’s freedom of the press and then harms to America’s diplomatic effectiveness. And, in advance, I thank the many scholars, policy experts, and friends who took the time to give me thoughts on earlier drafts of this post. Read the rest of this post »
posted by Frank Pasquale
Don’t worry, it’s not another prolix post from me, just commentary on Jack Goldsmith’s Seven Thoughts on Wikileaks and Lovink & Riemens’s Twelve theses on WikiLeaks. (And here’s an FAQ for those confused by the whole controversy.)
Goldsmith, who takes cybersecurity very seriously, nevertheless finds himself “agreeing with those who think Assange is being unduly vilified.” He believes that “it is not obvious what law he has violated,” and Geoff Stone today said that many Lieberman-inspired efforts to expand the Espionage Act to include Assange’s conduct would be unconstitutional. Goldsmith asks:
What if there were no wikileaks and Manning had simply given the Lady Gaga CD to the Times? Presumably the Times would eventually have published most of the same information, with a few redactions, for all the world to see. Would our reaction to that have been more subdued than our reaction now to Assange? If so, why?
Lovink & Riemens provide something of an answer:
Read the rest of this post »
December 11, 2010 at 9:39 pm Posted in: Anonymity, Current Events, Cyber Civil Rights, First Amendment, Google & Search Engines, Government Secrecy, Privacy, Privacy (Electronic Surveillance), Privacy (National Security), Science Fiction, Wiki
posted by Salil Mehra
First off, thanks to Concurring Opinions and Danielle Citron for hosting this online symposium on Jonathan Zittrain’s The Future of the Internet – and How to Stop It. Before I launch into my own thoughts, I want to add my own version of the praise that the book has already won. It is an immensely readable work that succeeds in showing us where we’ve been, how we got to where we are, and the steps to take to avoid going where we’d rather not be.
I have three brief points, involving a comparison with Japan, some thoughts about competition, consumer protection and innovation, and finally, a somewhat different take on the lessons of Wikipedia.
This symposium is incredibly timely, particularly given the concern in recent weeks about the Google/Verizon agreement. In TFOTI, Zittrain highlights the risks that threaten the Internet’s future, and explains how the net neutrality debate is in some ways a mismatch for those risks. For example, he points out that the migration from the Internet to, in his words, tethered appliances like the iPhone and TiVo ultimately provides an end-run around net neutrality on the Internet (pp. 177-185). Accordingly, he argues that preserving generativity is a better-tailored principle.
The lead in The Economist this week also takes on the Google/Verizon agreement, and critiques net neutrality from a different angle, calling America’s “vitriolic net-neutrality debate” “a reflection of the lack of competition in broadband access.” If you’re reading this symposium, you probably already know, possibly because you read this, that in many other industrialized countries incumbent telcos were forced years ago – and not just in a superficial way – to open up wholesale broadband to competitors.
I’m in Tokyo this academic year thanks to Temple’s long reach across the globe and to my gracious hosts at Keio University Law School. I’ve been travelling to Japan repeatedly since the late 1980s, and one of the changes I’ve been struck by is how a country that in the 1990s was generally held to be well behind the U.S. in telecommunications now seems ahead in broadband and mobile Internet. Read the rest of this post »
posted by Jonathan Zittrain
I wrote The Future of the Internet — And How to Stop It, and its precursor law review article The Generative Internet, between 2004 and 2007. I wanted to capture a sense of just how bizarre the Internet — and the PC environment — were. How much the values and assumptions of, metaphorically, dot-org and dot-edu, rather than just dot-com, were built into the protocols of the Internet and the architecture of the PC. The amateur, hobbyist, backwater origins of the Internet and the PC were crucial to their success against more traditional counterparts, but also set the stage for a host of new problems as they became more popular.
The designers and makers of the Internet and PC platforms did not expect to come up with the applications for each — they figured unknown others would do that. So, unlike CompuServe, AOL, or Prodigy, the Internet didn’t have a main menu. And once for-profit ISPs started rolling the Internet out to anyone willing to subscribe, there came to be a critical mass of eyeballs ready to experience varieties of content and services — the providers of which didn’t have to negotiate a business deal with some Internet Overseer the way they did for CompuServe et al. Some content and services could be paid for, at least as soon as credit cards could function cheaply online, and others could be free — either because of a separate business model like advertising, or because the provider didn’t feel inclined to monetize visiting eyeballs. Tim Berners-Lee could invent the World Wide Web and have it run as just another application, seeking neither a patent on its workings nor an architecture for it that placed him in a position of control. Today, of course, the Web is so ubiquitous that people often confuse it with the Internet itself.
When bad apples emerge on an unmediated platform — and they do as soon as there are enough people using it to make it worth it to subvert it — it can be difficult to deal with them. If someone spams you on Facebook, the first step is to make it a customer service issue — complain to Facebook, and they can discipline the account. If someone spams you on email, it’s much trickier, because there’s no Email Manager — just lots of email servers, some big, some little, and many of them with accounts hacked by others. That’s one reason why a newer generation of Internet users prefers Facebook or Twitter messaging to old fashioned email. Same for the PC itself: with no PC Manager, there’s no easy way to get help or exact justice when exposed to malware. I worried that malware in particular, and cybersecurity in general, would be a fulcrum point in pushing “regular” people away from the happenstance of generative platforms designed by nerds who figured they could worry about security later. Hence a migration to less generative platforms managed like services rather than products.
I understand and sympathize with that migration. But it’s important to recognize its downsides — particularly if one is among the libertarian set, which has included some of the most vocal critics of The Future of the Internet. Whether software developer or user, volunteering control over one’s digital environment to a Manager means that the Manager can change one’s experience at any time — or worse, be compelled to by outside pressures. I write about this prospect at length here. The famously ungovernable Internet suddenly becomes much more governable, an outcome most libertarian types would be concerned about. Many Internet freedom proponents aren’t willing to argue for or trust those freedoms to a “mere” political process; they prefer to see them de facto guaranteed by a computing environment largely immune to regulation. Read the rest of this post »
posted by Danielle Citron
It’s an honor to introduce Jonathan Zittrain and the participants in our online symposium on The Future of the Internet–And How to Stop It. From tomorrow through Wednesday, we will be discussing Zittrain’s important book, which warns of a shift in the Internet’s trajectory from a wide-open Web of creative anarchy to a series of closed platforms that will curtail innovation. As Zittrain predicted, “tethered appliances” dominate our information ecosystem today. We increasingly trade generative technologies like PCs that permit experimentation for sterile, reliable appliances like mobile phones, video game consoles, and book readers that limit or forbid tinkering. Zittrain attributes this phenomenon to the unfortunate, yet now predictable, pathologies that generativity enables. Although generative technologies facilitate innovation, they permit the spread of spam, viruses, malware, and the like.
According to Zittrain, the Internet is at a crucial inflection point. Rather than sustaining the wide-open Web of creativity and disruption, the Internet may in time become a series of controlled networks that limit innovation and enable inappropriate governmental and corporate surveillance. Zittrain offers various strategies to forestall such scenarios, including tools to empower users to solve problems that drive users to sterile appliances and networks. Zittrain argues that our information ecology functions best with generative technology at its core.
The Future of the Internet raises a host of fascinating and timely questions. Is the future of the Internet indeed bleak? As this month’s cover story for Wired asks: is Zittrain’s dark future only likely in the “commercial content side” of the digital economy? Might a healthy balance of generative technologies and tethered appliances emerge, or is the move to appliancized networks a grab for control that will be difficult to shake? Will non-generative technologies impact our democratic commitments and cultural values? Should we remain committed to protecting generativity? Are there alternative strategies for preserving innovation besides the ones that Zittrain offers?
To consider these and other issues, we have invited an all-star cast of thinkers:
My co-bloggers will join this conversation as well. In a post in April 2009, co-blogger Deven Desai started our conversation about The Future of the Internet–And How to Stop It. Since that time, the wildfire adoption of tethered appliances, iPod applications, iTunes, and the like has shown just how prophetic and important Zittrain’s book is. We are excited for the discussion to begin.
September 6, 2010 at 2:58 pm Posted in: Administrative Announcements, Anonymity, Architecture, Cyberlaw, Google & Search Engines, Privacy, Symposium (Future of Internet), Technology, Web 2.0, Wiki
posted by Dave Hoffman
Almost four years ago, I blogged at Prawfs about a weird dispute on Wikipedia about the Kelo case. I wrote that “[t]here is a whole ADR and conflict resolution system being set up behind the scenes, in the absence of (a) money; (b) the Bar; or (c) personal contact. And we don’t have to go to Shasta County for months on end to see it.”
Wiki’s DR process continued to fascinate me, and I eventually teamed up with Temple’s Salil Mehra, a comparative IP scholar, to write about the system. We’ve just finished a draft, which starts with the following snippet:
Charles Darwin and Abraham Lincoln were both born on February 12, 1809. When some individuals hear about this coincidence, it seems remarkable. To others, it is mundane. To Wikipedia editors working on the encyclopedia’s articles about Darwin and Lincoln, the factoid was the subject of a contentious dispute resolution process that encompassed two polls, outside editor comments, a request for mediation, and a formal arbitration proceeding that generated over 30,000 words in evidentiary submissions and thousands of volunteer man-hours.
The problem motivating the fracas was whether or not the shared birthday merited inclusion in Wikipedia’s biography of Darwin. Because Wikipedia’s editing process is open, editors who disagree might endlessly recycle their views, leading to unstable articles, entrenched disagreement, and a loss of initiative, altogether destroying the site’s utility. In response, Wikipedia has developed a volunteer-run, highly articulated dispute resolution system. That system starts with the informal, guided exchange of views, muddles through mediation, and terminates in an Arbitration Committee, which hears evidence presented by the parties before issuing findings of fact and conclusions of policy and law. Such decisions, organized by volunteer arbitration clerks and disseminated by volunteer reporters, have created a virtual Wiki-common law.
As the result of the binding arbitration in the Darwin Birthday Dispute, two editors were banned from the site for a month for their lack of cooperation with others, and one was further prohibited from editing either Darwin’s or Lincoln’s article. A third individual was formally thanked by the arbitrators for his work as a counselor to one of the banned parties. The Arbitrators, per their usual rule, did not resolve the content of the dispute: non-banned parties were free to continue testing whether the Emancipator and the Scientist’s shared birthday was worthy of note.
There are at least two separate levels of strangeness about this story.
• Why do people spend time editing Wikipedia articles, and why would they care enough about this particular fact to disagree?
• Why does Wikipedia have a dispute resolution system that doesn’t resolve disputes?
Interested in reading more? Download our draft, which just went up on SSRN. Or, if you are a law review editor, check your inbox. We’re in there!
posted by Daniel Solove
I recently placed Chapter 1 of my new book, The Future of Reputation: Gossip, Rumor, and Privacy on the Internet (Yale Univ. Press, 2007) on SSRN. It can be downloaded for free.
posted by Dave Hoffman
Since early this year, and for the first extended period in Wikipedia’s history, the activity rate of the Wikipedia community has been declining. This can be seen in the rate of editing articles (-17%), the rate of new account registration (-25%), blocks (-30%), protections (-30%), uploads (-10%), article deletions (-25%), etc. Some exceptions are the article creation rate (+25%) and image deletions (+80%), but overall the community appears to be doing less now than it was 6 months ago.
If these data are reliable, you’ve got to wonder what happened. Is it the Essjay-related credibility problem, as the author of the post suggests, or is it a breakdown of Wikipedia’s dispute resolution system? I’m tempted toward the latter explanation as at least a contributing factor, not least because it fits part of the story I’m writing in a jointly authored article about Wikipedia’s dispute resolution process. (Previewed in this blog post.) In particular, the number of “reverts” is on the rise, reducing the value of thoughtful editing and community involvement. Revert wars, as a form of unproductive low-level conflict between users, are just what the dispute resolution system was designed to ameliorate.
Update: For more evidence of the thesis, check out this post from later in the same thread (emphasis added):
Personally, I would suggest that Wikipedia has indeed become more bureaucratic, and it will progress little further until a rethink of the core ideology is considered, particularly wrt. to how to derive/amend policy, core policy issues, handling bias or concepts of truth, dispute resolution and what to do when there isn’t consensus (i.e. no consensus for the status quo, no consensus for proposed or active changes). The whole idea that Wikipedia acts by consensus is a sham. It’s not a democracy of course either, it’s not even anarchy, or specifically authority-driven (dictatorial). In individual cases it’s whatever people can get away with. That’s not a good concept of consensus (i.e. “what sticks is there by tacit agreement”); it ignores the fact that rational people will eventually give up rather than deal with bullies and morons.
posted by Daniel Solove
I’m very excited to announce that my new book, The Future of Reputation: Gossip, Rumor, and Privacy, is now hot off the presses! Copies are now in stock and available on Amazon.com and Barnes & Noble’s website. Copies will hit bookstores in a few weeks.
From the book jacket:
Teeming with chatrooms, online discussion groups, and blogs, the Internet offers previously unimagined opportunities for personal expression and communication. But there’s a dark side to the story. A trail of information fragments about us is forever preserved on the Internet, instantly available in a Google search. A permanent chronicle of our private lives—often of dubious reliability and sometimes totally false—will follow us wherever we go, accessible to friends, strangers, dates, employers, neighbors, relatives, and anyone else who cares to look. This engrossing book, brimming with amazing examples of gossip, slander, and rumor on the Internet, explores the profound implications of the online collision between free speech and privacy.
Daniel Solove, an authority on information privacy law, offers a fascinating account of how the Internet is transforming gossip, the way we shame others, and our ability to protect our own reputations. Focusing on blogs, Internet communities, cybermobs, and other current trends, he shows that, ironically, the unconstrained flow of information on the Internet may impede opportunities for self-development and freedom. Long-standing notions of privacy need review, the author contends: unless we establish a balance between privacy and free speech, we may discover that the freedom of the Internet makes us less free.
For quite some time, I’ve been thinking about the issue of how to balance the privacy and free speech issues involved with blogging and social networking sites. In the book, I do my best to propose some solutions, but my primary goal is to spark debate and discussion. I’m aiming to reach as broad an audience as possible and to make the book lively yet educational. I hope I’ve achieved these goals.
I welcome any feedback. Please let me know what you think of the book, as I’d be very interested in your thoughts.
October 2, 2007 at 12:31 am Posted in: Articles and Books, First Amendment, Google & Search Engines, Media Law, Privacy, Privacy (Consumer Privacy), Privacy (Gossip & Shaming), Technology, Wiki
posted by Neil Richards
Dave’s post on WikiScanner reminds me of an article last week in The Times about the other juicy revelations that WikiScanner has uncovered, such as self-editing by the CIA, the Vatican, the British Labour Party, and a number of big corporations. The article goes on to argue:
There is no necessary reason that Wikipedia’s continual revisions enhance knowledge. It is quite as conceivable that an early version of an entry in Wikipedia will be written by someone who knows the subject, and later editors will dissipate whatever value is there. Wikipedia seeks not truth but consensus, and like an interminable political meeting the end result will be dominated by the loudest and most persistent voices.
This is a good (if a bit grumpy) criticism of the Wiki model. Wikis do seem to gravitate towards consensus, and as such are really efficient aggregators of facts. Where facts are not in dispute, Wikis do a fantastic job. For example, if you wish to learn about The Simpsons, Doctor Who, or the genealogy of the House of Windsor, Wikipedia is a great resource.
But for the important questions, it is quite different. Any time judgment or contested notions of truth come into play, people are quite naturally going to assert their own view of reality. Wikipedia is just another context (albeit a highly manipulable one) in which these fights play out. In addition to consensus, money, energy, and persistence can affect how the “truth” is presented. It probably shouldn’t be surprising that Wikipedia entries are being manipulated in this way. If anything, it’s more surprising that people seem to believe that Wikipedia entries can give them easy truth on complicated questions that require judgment, reflection, interpretation, and thought. Even Encyclopedia Britannica can’t do that, though it may be a little less subject to manipulation in the name of good PR. But then again, Britannica is probably not as strong on Gary Coleman’s appearance on The Simpsons (episode 235, in case you were wondering).
posted by Dave Hoffman
WikiScanner is this week’s killer app. Prompted by a short post on Xoxohth, I decided to see whether our nation’s busy law firm lawyers are spending their downtime editing Wikipedia entries. And, of course, they are. Of the thousands of edits I saw, I decided to focus on one topic: edits to law firms’ own Wikipedia pages. Not surprisingly, law firms are using Wikipedia to burnish their reputations and trash their competitors. Here are a few examples:
Wachtell’s edits (Editing Kramer Levin, Cravath, and Wachtell)
S&C’s edits (editing S&C)
Skadden’s edits (editing Jones Day and Skadden)
Baker’s edits (editing Baker)
Jones Day’s edits (editing Jones Day)
Latham’s edits (editing Latham and Cravath)
Sidley’s edits (editing Ropes, Sidley, and asserting that Sidley is a white shoe firm)
Shearman’s edits (editing Shearman)
White and Case’s edits (adding W&C as a white shoe firm)
Morgan’s edits (editing Morgan)
Mayer Brown’s edits (adding Mayer as a part of “Big Law”)
Davis Polk’s edits (editing Davis)
There is quite a bit more in these records. Honors go to the first reader who can find an edit by a law firm of a client’s webpage that either deals with a then-pending legal dispute or offers a critique or negative comment.
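For the curious, WikiScanner’s core trick is easy to sketch: take the IP address recorded on an anonymous Wikipedia edit and check whether it falls inside an address block that public WHOIS records register to an organization. Here is a minimal illustration using Python’s standard library; the organization names and address ranges below are made-up placeholders (IANA documentation ranges), not real firm allocations.

```python
import ipaddress

# Hypothetical registered ranges for a few organizations. The real
# WikiScanner matched against a large IP-to-organization WHOIS database.
ORG_RANGES = {
    "Example Law Firm LLP": ipaddress.ip_network("203.0.113.0/24"),
    "Example Corp": ipaddress.ip_network("198.51.100.0/24"),
}

def attribute_edit(edit_ip: str) -> str:
    """Return the organization whose registered range contains edit_ip."""
    ip = ipaddress.ip_address(edit_ip)
    for org, network in ORG_RANGES.items():
        if ip in network:
            return org
    return "unknown"

print(attribute_edit("203.0.113.42"))  # falls in the first range
print(attribute_edit("8.8.8.8"))       # matches no registered range
```

Note what this does and does not prove: an IP inside a firm’s range shows the edit came from the firm’s network, not which person at the firm made it.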
posted by Dave Hoffman
Check out this bizarre story: a Wikipedia administrator has allegedly distorted the editing of the site’s article on the Entebbe operation because, this site alleges, she is a spy for an unidentified national government.
Believable? Who knows. I’ve got to think that a spy agency that spends its human capital editing wikipedia entries instead of, say, finding the nation’s enemies and introducing them to targeted justice, has a misplaced set of priorities. Even if the agency were to suppress, in one medium, some aspect of the “truth” about its activities, the internet is like a vast gopher game: suppress a fact here, and it pops up there.
posted by Daniel Solove
One of the virtues of the online encyclopedia Wikipedia is that it can reflect new information very quickly after it becomes known. But there’s a rather odd development in the case of wrestler Chris Benoit’s murder of his family and suicide. From the AP:
Investigators are looking into who altered pro wrestler Chris Benoit’s Wikipedia entry to mention his wife’s death hours before authorities discovered the bodies of the couple and their 7-year-old son.
Benoit’s Wikipedia entry was altered early Monday to say that the wrestler had missed a match two days earlier because of his wife’s death.
A Wikipedia official, Cary Bass, said Thursday that the entry was made by someone using an Internet protocol address registered in Stamford, Connecticut, where World Wrestling Entertainment is based.
An IP address, a unique series of numbers carried by every machine connected to the Internet, does not necessarily have to be broadcast from where it is registered. The bodies were found in Benoit’s home in suburban Atlanta, and it’s not known where the posting was sent from, Bass said. . . .
Benoit’s page on Wikipedia, a reference site that allows users to add and edit information, was updated at 12:01 a.m. Monday, about 14 hours before authorities say the bodies were found. The reason he missed a match Saturday night was “stemming from the death of his wife Nancy,” it said.
posted by Dave Hoffman
In July of 2006, I argued here that the law review submission process would be aided by a Wiki. The purpose of the page: to collect information on submissions, accepted articles, board preferences, and other useful tips.
So I started a place where folks could work together to create a public good: lawreviews.wikispaces.com
A reader who is “a bit of a wiki-cynic” reminded me of the project recently. The page seems to have withered on the vine. What happened, folks? Is this project less socially useful than, say, a description of the cell nucleus, today’s featured Wikipedia article?