posted by Andrew Blair-Stanek
Intellectual property has become a major tax-avoidance vehicle for multinationals. Front-page articles in the New York Times and Wall Street Journal have detailed how IP-heavy companies like Apple, Google, and Big Pharma play games with their IP to avoid taxes on a massive scale. For example, Apple uses IP-based tax-avoidance strategies to reduce its effective tax rate to approximately 8%, well below the statutory 35% corporate tax rate (and well below most middle-class Americans’ tax rates).
Two characteristics of IP make it the ideal tax-avoidance vehicle. First, the uniqueness of every piece of IP makes its fair market value extremely hard to establish, allowing taxpayers to choose whatever valuations result in the least tax. Second, unlike workers or physical assets like factories or stores, IP can easily be moved to tax havens via mere paperwork.
But Starbucks is a bricks-and-mortar retailer dependent upon physical presence in high-tax countries. It wouldn’t seem to be in a position to use these IP-based tax tricks. Yet in an excellent, eye-opening paper, Edward Kleinbard (USC) delves into the strategies that Starbucks uses to substantially reduce its worldwide tax burden. Most interestingly, Starbucks puts IP like trademarks, proprietary roasting methods, operational expertise, and store trade dress into low-tax jurisdictions. Kleinbard cogently observes that the ability of a bricks-and-mortar retailer like Starbucks to play such games demonstrates how deep the flaws run in current U.S. and international tax policy.
posted by David Schwartz
In the last decade or so, patent litigation in the United States has undergone enormous changes. Perhaps most profound is the rise in enforcement of patents held by people and entities who don’t make any products or otherwise participate in the marketplace. Some call these patent holders ‘non-practicing entities’ (NPEs), while others use the term ‘patent assertion entities’ (PAEs), and some pejoratively refer to some or all of these patent holders as ‘trolls.’ These outsiders come in many different flavors: individual inventors, universities, failed startups, and holding companies that own a patent or family of patents.
This post is about a particular type of outsider that is relatively new: the mass patent aggregator. The mass patent aggregator owns or controls a significant number of patents – hundreds or even thousands – which it acquired from different sources, including from companies that manufacture products. These mass aggregators often seek to license their portfolios to large practicing entities for significant amounts of money, sometimes using infringement litigation as the vehicle. Aggregators often focus their portfolios on certain industries such as consumer electronics.
Mass aggregator patent litigation and ordinary patent litigation appear to differ in one important respect. Mass aggregators sue on a few patents in their portfolio, which serve as proxies for the quality of their entire portfolio. The parties use the court’s views of the litigated patents to determine how to value the full patent portfolio. By litigating only a small subset of their portfolio, the aggregator and potential licensee avoid the expense of litigating all of the patents. But the court adjudicates the dispute completely oblivious to the proxy aspect of the litigation. Instead, the court handles it like every other case – by analyzing the merits of the various claims and defenses.
If the court understood the underlying dispute was litigation-by-proxy, would it (or could it) proceed any differently? I will discuss my thoughts on this question in another blog post. For now, I have a question: does proxy litigation occur in other areas of law?
May 27, 2013 at 11:42 pm Tags: aggregator, Intellectual Property, non-practicing entity, npe, patent, patent litigation, proxy litigation Posted in: Courts, Intellectual Property
posted by Stanford Law Review
The Stanford Law Review Online has just published an Essay by Dan L. Burk entitled Anticipating Patentable Subject Matter. Professor Burk argues that the fact that something might be found in nature should not necessarily preclude its patentability:
The Supreme Court has added to its upcoming docket Association for Molecular Pathology v. Myriad Genetics, Inc., to consider the question: “Are human genes patentable?” This question implicates patent law’s “products of nature” doctrine, which excludes from patentability naturally occurring materials. The Supreme Court has previously recognized that “anything under the sun that is made by man” falls within patentable subject matter, implying that things under the sun not made by man do not fall within patentable subject matter.
One of the recurring arguments for classifying genes as products of nature has been that these materials, even if created in the laboratory, could sometimes instead have been located by scouring the contents of human cells. But virtually the same argument has been advanced and rejected in another area of patent law: the novelty of patented inventions. The rule in that context has been that we reward the inventor who provides us with access to the materials, even if in hindsight they might have already been present in the prior art. As a matter of doctrine and policy, the rule for patentable subject matter should be the same.
“I can find the invention somewhere in nature once an inventor has shown it to me” is clearly the wrong standard for a patent system that hopes to promote progress in the useful arts. The fact that a version of the invention may have previously existed, unrecognized, unavailable, and unappreciated, should be irrelevant to patentability under either novelty or subject matter. The proper question is: did the inventor make available to humankind something we didn’t have available before? On this standard, the reverse transcribed molecules created by the inventors in Myriad are clearly patentable subject matter.
February 21, 2013 at 10:30 am Tags: biology, Intellectual Property, law and science, nature, patent law, patents, science, Supreme Court Posted in: Intellectual Property, Law Rev (Stanford), Supreme Court
posted by Stanford Law Review
The Stanford Law Review Online has just published an Essay by Lee Petherbridge and Jason Rantanen entitled In Memoriam Best Mode. Professors Petherbridge and Rantanen discuss an overlooked element of the Leahy-Smith America Invents Act—the de facto elimination of the requirement that inventors include a description of the “best mode” of practicing their inventions in patent applications:
On September 16, 2011, President Obama signed into law the Leahy-Smith America Invents Act. It embodies the most substantial legislative overhaul of patent law and practice in more than half a century. Commentators have begun the sizable task of unearthing and calling attention to the many effects the Act may have on the American and international innovation communities. Debates have sprung up over the consequences to inventors small and large, and commentators have obsessed over the Act’s so-called “first-to-file” and “post-grant review” provisions. Lost in the frenzy to understand the consequences of the new Act has been the demise of patent law’s “best mode” requirement.
The purpose of this short essay is to draw attention to a benefit the best mode requirement provides—or perhaps “provided” would be a better word—to the patent system that has not been the subject of previous discussion. The benefit we describe directly challenges the conventional attitude that best mode is divorced from the realities of the patent system and the commercial marketplace. Our analysis suggests that patent reformers may have been much too quick to dismiss best mode as a largely irrelevant, and mostly problematic, doctrine.
Even while best mode can produce patent disclosures that have broader prior art effect, it simultaneously can cooperate with the doctrines of claim construction and written description to produce patents with claims that may be construed as having a narrower scope. Detailed descriptions of especially effective embodiments of an invention can have the effect of introducing elements that courts often find, either through the application of claim construction or written description doctrines, to be essential elements of an invention. Competitors that do not employ such essential elements are not infringers. Thus, best mode can further help establish and maintain the public domain by limiting the amount of information restricted by patents, thereby increasing the distance between bubbles of patent-restricted information.
posted by Ted Striphas
I first happened across Julie Cohen’s work around two years ago, when I started researching privacy concerns related to Amazon.com’s e-reading device, Kindle. Law professor Jessica Litman and free software doyen Richard Stallman had both talked about a “right to read,” but never was this concept placed on so sure a legal footing as it was in Cohen’s 1996 essay, “A Right to Read Anonymously.” Her piece helped me to understand the illiberal tendencies of Kindle and other leading commercial e-readers, which are (and I’m pleased more people are coming to understand this) data gatherers as much as they are appliances for delivering and consuming texts of various kinds.
Truth be told, while my engagement with Cohen’s “Right to Read Anonymously” essay proved productive for this particular project, it also provoked a broader philosophical crisis in my work. The move into rights discourse was a major departure — a ticket, if you will, into the world of liberal political and legal theory. Many there welcomed me with open arms, despite the awkwardness with which I shouldered an unfamiliar brand of baggage trademarked under the name, “Possessive Individualism.” One good soul did manage to ask about the implications of my venturing forth into a notion of selfhood vested in the concept of private property. I couldn’t muster much of an answer beyond suggesting, sheepishly, that it was something I needed to work through.
It’s difficult and even problematic to divine back-story based on a single text. Still, having read Cohen’s latest, Configuring the Networked Self, I suspect that she may have undergone a crisis not unlike my own. The sixteen years spanning “A Right to Read Anonymously” and Configuring the Networked Self are enormous. I mean that less in terms of the time frame (during which Cohen was highly productive, let’s be clear) than in terms of the refinement in the thinking. Between 1996 and 2012 you see the emergence of a confident, postliberal thinker. This is someone who, confronted with the complexities of everyday life in highly technologized societies, now sees possessive individualism for what it is: a reductive management strategy, one whose conception of society seems more appropriate to describing life on a preschool playground than it does to forms of interaction mediated by the likes of Facebook, Google, Twitter, Apple, and Amazon.
In this, Configuring the Networked Self is an extraordinary work of synthesis, drawing together a diverse array of fields and literatures: legal studies in its many guises, especially its critical variants; science and technology studies; human-computer interaction; phenomenology; post-structuralist philosophy; anthropology; American studies; and surely more. More to the point, it’s an unusually generous example of scholarly work, given Cohen’s ability to see in and draw out of this material its very best contributions.
I’m tempted to characterize the book as a work of cultural studies given the central role the categories culture and everyday life play in the text, although I’m not sure Cohen would have chosen that identification herself. I say this not only because of the book’s serious challenges to liberalism, but also because of the sophisticated way in which Cohen situates the cultural realm.
This is more than just a way of saying she takes culture seriously. Many legal scholars have taken culture seriously, especially those interested in questions of privacy and intellectual property, which are two of Cohen’s foremost concerns. What sets Configuring the Networked Self apart from the vast majority of culturally inflected legal scholarship is her unwillingness to take for granted the definition — you might even say, “being” — of the category, culture. Consider this passage, for example, where she discusses Lawrence Lessig’s pathbreaking book Code and Other Laws of Cyberspace:
The four-part Code framework…cannot take us where we need to go. An account of regulation emerging from the Newtonian interaction of code, law, market, and norms [i.e., culture] is far too simple regarding both instrumentalities and effects. The architectures of control now coalescing around issues of copyright and security signal systemic realignments in the ordering of vast sectors of activity both inside and outside markets, in response to asserted needs that are both economic and societal. (chap. 7, p. 24)
What Cohen is asking us to do here is to see culture not as a domain distinct from the legal, or the technological, or the economic, which is to say, something to be acted upon (regulated) by one or more of these adjacent spheres. This liberal-instrumental (“Newtonian”) view may have been appropriate in an earlier historical moment, but not today. Instead, she is urging us to see how these categories are increasingly embedded in one another and how, then, the boundaries separating the one from the other have grown increasingly diffuse and therefore difficult to manage.
The implications of this view are compelling, especially where law and culture are concerned. The psychologist Abraham Maslow once said, “it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.” In the old, liberal view, one wielded the law in precisely this way — as a blunt instrument. Cohen, for her part, still appreciates how the law’s “resolute pragmatism” offers an antidote to despair (chap. 1, p. 20), but her analysis of the “ordinary routines and rhythms of everyday practice” in and around networked culture leads her to a subtler conclusion (chap. 1, p. 21). She writes: “practice does not need to wait for an official version of culture to lead the way….We need stories that remind people how meaning emerges from the uncontrolled and unexpected — stories that highlight the importance of cultural play and of spaces and contexts within which play occurs” (chap. 10, p. 1).
It’s not enough, then, to regulate with a delicate hand and then “punt to culture,” as one attorney memorably put it in an anthropological study of the free software movement. Instead, Cohen seems to be suggesting that we treat legal discourse itself as a form of storytelling, one akin to poetry, prose, or any number of other types of everyday cultural practice. Important though they may be, law and jurisprudence are but one means for narrating a society, or for arriving at its self-understandings and range of acceptable behaviors.
Indeed, we’re only as good as the stories we tell ourselves. This much Jaron Lanier, one of the participants in this week’s symposium, suggested in his recent book, You Are Not a Gadget. There he showed how the metaphorics of desktops and filing, generative though they may be, have nonetheless limited the imaginativeness of computer interface design. We deserve computers that are both functionally richer and experientially more robust, he insists, and to achieve that we need to start telling more sophisticated stories about the relationship of digital technologies and the human body. Lousy stories, in short, make for lousy technologies.
Cohen arrives at an analogous conclusion. Liberalism, generative though it may be, has nonetheless limited our ability to conceive of the relationships among law, culture, technology, and markets. They are all in one another and of one another. And until we can figure out how to narrate that complexity, we’ll be at a loss to know how to live ethically, or at the very least mindfully, in a densely interconnected, information-rich world. Lousy stories make for lousy laws and ultimately, then, for lousy understandings of culture.
The purposes of Configuring the Networked Self are many, no doubt. For those of us working in the twilight zone of law, culture, and technology, it is a touchstone for how to navigate postliberal life with greater grasp — intellectually, experientially, and argumentatively. It is, in other words, an important first chapter in a better story about ordinary life in a high-tech world.
posted by Brett Frischmann
Thank you to Marvin for an excellent article to read and discuss, and thank you Concurring Opinions for providing a public forum for our discussion.
In the article, the critical approach that Marvin takes to challenge the “standard” model of the First Amendment is really interesting. He claims that the standard model of the First Amendment focuses on preserving speakers’ freedom by restricting government action and leaves any affirmative obligations for government to sustain open public spaces to a patchwork of exceptions lacking any coherent theory or principles. A significant consequence of this model is that open public spaces for speech—I want to substitute “infrastructure” for “spaces”—are marginalized and taken for granted. My forthcoming book—Infrastructure: The Social Value of Shared Resources—explains why such marginalization occurs in this and various other contexts and develops a theory to support the exceptions. But I’ll leave those thoughts aside for now and perhaps explore them in another post. And I’ll leave it to the First Amendment scholars to debate Marvin’s claim about what is the standard model for the First Amendment.
Instead, I would like to point out how a similar (maybe the same) problem can be seen in the Supreme Court’s most recent copyright opinion. In Golan v. Holder, Justice Ginsburg marginalizes the public domain in startling fashion. Since it is a copyright case, the “model” is flipped around: government is empowered to grant exclusive rights (and restrict some speakers’ freedom), and any restrictions on the government’s power to do so are limited to narrow exceptions, i.e., the idea-expression distinction and fair use. A central argument in the case was that the public domain itself is another restriction. The public domain is not expressly mentioned in the IP Clause of the Constitution, but arguably it is implicit throughout (“Progress of Science and useful Arts,” “limited Times”). Besides, the public domain is inescapably part of the reality that we stand on the shoulders of generations of giants. Most copyright scholars believed that Congress could not grant copyright to works in the public domain (and probably thought that the issue raised in the case – involving restoration of copyright for foreign works that had not been granted protection in the U.S. – presented an exceptional situation that might be dealt with as such). But the Court declined to rule narrowly and firmly rejected the argument that “the Constitution renders the public domain largely untouchable by Congress.” In the end, Congress appears to have incredibly broad latitude to exercise its power, limited only by the need to preserve the “traditional contours.”
Of course, it is much more troublesome that the Supreme Court (rather than scholars interpreting Supreme Court cases) has adopted a flawed conceptual model that marginalizes basic public infrastructure. We’re stuck with it.
posted by Stanford Law Review
The Stanford Law Review Online has just published a piece by Mark Lemley, David S. Levine, and David G. Post on the PROTECT IP Act and the Stop Online Piracy Act. In Don’t Break the Internet, they argue that the two bills — intended to counter online copyright and trademark infringement — “share an underlying approach and an enforcement philosophy that pose grave constitutional problems and that could have potentially disastrous consequences for the stability and security of the Internet’s addressing system, for the principle of interconnectivity that has helped drive the Internet’s extraordinary growth, and for free expression.”
These bills, and the enforcement philosophy that underlies them, represent a dramatic retreat from this country’s tradition of leadership in supporting the free exchange of information and ideas on the Internet. At a time when many foreign governments have dramatically stepped up their efforts to censor Internet communications, these bills would incorporate into U.S. law a principle more closely associated with those repressive regimes: a right to insist on the removal of content from the global Internet, regardless of where it may have originated or be located, in service of the exigencies of domestic law.
December 19, 2011 at 3:14 am Tags: banks, credit card companies, DNS, DNS filtering, domain name seizures, domain name servers, domain names, financial institutions, Intellectual Property, Internet, internet security, internet stability, IP, IP addresses, IP rights, online advertisers, PROTECT IP Act, search engine censorship, search engines, SOPA, Stop Online Piracy Act, World Wide Web Posted in: Current Events, Cyberlaw, First Amendment, Google & Search Engines, Innovation, Intellectual Property, International & Comparative Law, Law Rev (Stanford), Law School (Law Reviews), Movies & Television, Property Law, Social Network Websites
posted by Lea Shaver
It’s become a truism in IP scholarship to introduce a discussion by acknowledging the remarkable recent rise in popular, scholarly, and political interest in our field. Thus readers will recognize a familiar sentiment in the opening line of Amy Kapczynski and Gaëlle Krikorian’s new book:
A decade or two ago, the words “intellectual property” were rarely heard in polite company, much less in street demonstrations or on college campuses. Today, this once technical concept has become a conceptual battlefield.
Only recently, however, has it become possible to put this anecdotal consensus to empirical test.
In December 2010, Google launched ngrams, a simple tool for searching its vast repository of digitized books and charting the frequency of specific terms over time. (It controls for the fact that there are many more books being published today.)
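The normalization described in that parenthetical is easy to illustrate. The following is a toy Python sketch with invented numbers, not Google's actual pipeline: it shows why dividing a term's yearly count by the total number of words published that year keeps growth in publishing volume from masquerading as growth in usage.

```python
def relative_frequency(term_counts, total_counts):
    """Map each year to (term count / total word count) for that year."""
    return {year: term_counts[year] / total_counts[year]
            for year in term_counts if total_counts.get(year)}

# Hypothetical yearly counts of a phrase (invented for illustration).
term_counts = {1980: 1_000, 2000: 12_000}
# Hypothetical total words scanned per year (invented for illustration).
total_counts = {1980: 1_000_000_000, 2000: 4_000_000_000}

freqs = relative_frequency(term_counts, total_counts)

# Raw counts rose 12x, but normalized frequency rose only 3x,
# because four times as much text was published in 2000.
print(freqs[2000] / freqs[1980])  # → 3.0
```

The ngrams charts plot the normalized figure, which is why a rising line reflects a real shift in discourse rather than a bigger corpus.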
If you haven’t already played around with this tool to explore your own topics of interest, you should. While you’re at it, take a stab at explaining why writing on the Supreme Court rose steadily until approximately 1935 and has dropped just as steadily ever since!
Back to our topic, though. What does this data reveal about the prominence of intellectual property in published discourse?
I generated two graphs, both charting the terms “intellectual property,” “copyright,” “patent,” and “trademark.” First, the long view:
February 3, 2011 at 2:25 pm Tags: access to knowledge, commons, fair use, Google, Intellectual Property, ngram, open access, public domain Posted in: Symposium (Access to Knowledge)
posted by Jonathan Lipson
Yesterday, I had the all-too-brief pleasure of sitting in on the first couple of talks at the Wisconsin Law Review’s Symposium, Intergenerational Equity and Intellectual Property, here in Madison.
Organized by my colleague, Shubha Ghosh (and starring, among others, CoOp-erator Deven Desai), the goal is important: How do we understand the intergenerational consequences of a legal regime—intellectual property—that is strongly determined by the present, but which has significant, but under-theorized, consequences for the future? Fights about extending the term of the Mickey Mouse copyright—or any set of long-haul rights—don’t just affect my kids, but potentially their kids, their kids’ kids, and so on. These are, in short, really fights about intergenerational equity.
I was only able to hear Michigan’s Peggy Radin (Property Longa, Vita Brevis) and Penn’s Matt Adler (Intergenerational Equity: Puzzles for Welfarists), but as expected, both provided awesome overviews of these sorts of problems. As Radin pointed out, intellectual property (knowledge and information law generally) always involves two types of generational problems: One is temporal (my parents, me, my kids, their kids, etc.); the other is technological (my students barely know from videotape; I will never beat my daughter at any computer game).
Adler explained that it is easy (and perhaps imprudent) to dismiss the utility of welfare economics as a tool to make these sorts of decisions. Certainly, we might say, Benthamite sums of utils could predict little for those not in existence (the future): what would their utility function be, really?
Yet, he observed, robust and subtle analytic models and conceptual frameworks are being developed by the Sens and Arrows of the world, and they may (if the future is bright) help develop more equitable and effective decision tools for matters with a long temporal reach.
Those who follow state politics may find this all a bit ironic. Wisconsin’s recent election was a decisive victory for Republicans, who captured both houses of the legislature and the Governor’s office on a message which may strain the state’s motto, “Forward.”
If Republicans keep their word, tax breaks for the rich and elderly will replace education and healthcare spending for the young and unborn; fossil fuel (old tech) subsidies will replace biofuel (new tech) development; and the University may have to fight to continue its path-breaking stem-cell research, certainly a way to kill both jobs in the present and medical miracles in the future. This may be good for baby boomers, but isn’t likely so hot for their grandkids.
Wisconsin’s liberals are, of course, despondent over their loss of power and position. Yet, forecasting and discounting long-term causation are among the things that make questions of intergenerational equity so interesting and difficult. I doubt Newt Gingrich thought in 1994 that the Contract with America would virtually assure Bill Clinton a second term, but today the former seems to have led to the latter. Likewise, it is certain that neither Jeremy Bentham nor Pete Townshend could have predicted the duration of their memetic contributions to today’s discussions about tomorrow. They probably just thought it was all rock and roll.
November 13, 2010 at 6:33 pm Tags: Intellectual Property, intergenerational equity, Republicans, Wisconsin Posted in: Conferences, Economic Analysis of Law, Intellectual Property, Jurisprudence, Law Rev (Wisconsin)
On the Colloquy: The Credit Crisis, Refusal-to-Deal, Procreation & the Constitution, and Open Records vs. Death-Related Privacy Rights
posted by Northwestern University Law Review
This summer started off with a three-part series from Professor Olufunmilayo B. Arewa looking at the credit crisis and possible changes that would focus on averting future market failures, rather than continuing to create regulations that only address past ones. Part I of Prof. Arewa’s series looks at the failure of risk management within the financial industry. Part II analyzes the regulatory failures that contributed to the credit crisis as well as potential reforms. Part III concludes by addressing recent legislation and whether it will actually help solve these very real problems.
Next, Professors Alan Devlin and Michael Jacobs take on an issue at the “heart of a highly divisive, international debate over the proper application of antitrust laws” – what should be done when a dominant firm refuses to share its intellectual property, even at monopoly prices.
Professor Carter Dillard then discussed the circumstances in which it may be morally permissible, and possibly even legally permissible, for a state to intervene and prohibit procreation.
Rounding out the summer was Professor Clay Calvert’s article looking at journalists’ use of open records laws and death-related privacy rights. Calvert questions whether journalists have a responsibility beyond simply reporting dying words and graphic images. He concludes that, at the very least, journalists should consider the impact their reporting has on surviving family members.
September 5, 2010 at 1:15 pm Tags: Antitrust, Constitutional Law, copyright, discrimination, financial crisis, free speech, Intellectual Property, Privacy, trademark Posted in: Antitrust, Bioethics, Civil Rights, Constitutional Law, Corporate Finance, First Amendment, Intellectual Property, Privacy, Securities, Securities Regulation
posted by Gaia Bernstein
Earlier this week, in a dramatic decision, a district court invalidated two breast cancer gene patents held by Myriad Genetics covering the BRCA1 and BRCA2 genes. The Court based its decision on patent subject matter analysis, holding that since the isolated DNA covered by Myriad’s patents is not markedly different from the native DNA as it exists in nature, it qualifies as a product of nature, which is not patentable subject matter.
No doubt, as commentators have noted (here and here), this decision, if not overturned or limited on appeal, could carry broad ramifications for the future of gene patents. But the decision also signifies a change in strategy in the efforts to restrict gene patents – a focus on the patient.
As I have written, most of the debates over gene patents have addressed the way gene patents affect genetic research – the concern that granting patents on the building blocks of genetic science will hinder the development of more complex innovations. Unsurprisingly, most academic proposals and legislative bills address the innovation problem. Until now, the effects on the patient took a back seat.
This lawsuit against Myriad signifies a change in that it finally places the patient and the administration of genetic testing at center stage. Although the Court’s holding focuses on patent subject matter, the court dedicates a significant part of the opinion to access to BRCA1/2 genetic testing. Myriad charges about $3,000 for testing – an exorbitant amount compared to other genetic tests. Furthermore, Myriad does not allow other laboratories to conduct the testing – all samples must be sent to its headquarters in Salt Lake City. The opinion tells the stories of women who were unable to find out whether they carry BRCA1/2 mutations because Myriad would not accept their insurance. It recounts the ordeals of women who could not get definitive answers through Myriad’s testing and were precluded from seeking testing elsewhere. It underscores that women were unable to get a second opinion on their test results because the tests are conducted only by Myriad. It also discusses the efforts of doctors and laboratories who were willing and able to offer BRCA1/2 testing but were precluded by Myriad from doing so.
posted by Jon Siegel
Twitter’s application for a trademark registration on the word “tweet” was recently rejected, which led to a discussion among some colleagues and myself as to whether the word is a generic term. The argument in favor is that the word “tweet” has become a common term, which has entered dictionaries and even the AP style guide, as the linked article shows.
A basic principle of trademark law is that no one can trademark a “generic” term, which is to say, the common term for the article or service being sold. Thus, no one could own the exclusive right to sell toothpaste under the name “toothpaste.” That would hardly be fair to competing sellers of toothpaste, and a generic term also doesn’t perform the basic function of a trademark, which is to tell consumers the source of the product, not what the product is.
Nonetheless, I would say that “tweet” is not generic. Yes, “tweet” has become a common term, but with what meaning? To me, “tweet” means, “a short message carried via the Twitter service.” It doesn’t mean, generically, “a short message,” or even “a short message carried via some social networking service.” It is specific to Twitter. I don’t think of the short messages I send to my Facebook friends as “tweets.”
This usage is confirmed by that eminently reliable source, Wikipedia, which defines “tweet” as “A micro-blog post on the Twitter social network site, or the act of posting on it.” And urbandictionary.com says that a “tweet” is “A post on Twitter, a real-time social messaging system.”
So I would say that “tweet” still performs a trademark’s source-indicating function. It tells you that the thing named is associated with Twitter specifically. Perhaps people will soon start referring to any short message as a “tweet,” but it hasn’t happened yet. So I say that “tweet” is not generic.
posted by Deven Desai
Just a quick note. I am fortunate to be in Hong Kong at The Age of Digital Convergence: An East-West Dialogue on Law, Media, and Technology. The Journalism and Media Studies Centre at The University of Hong Kong and the Intellectual Property Law Center at Drake University Law School organized the event, and the Faculty of Law at The University of Hong Kong and the Technology & Entertainment Law Program at Vanderbilt University Law School co-sponsored it.
The conference aims to address a range of questions:
What does it mean when people are born or have grown up digital? How do different forms of media interact with each other in an increasingly convergent environment? What type of legal, social and cultural challenges have arisen when people actively participate in the information age? Has the digital lifestyle paved the way for the development of new business models, social relationships and government regulation? Do we need to rethink some of our real-world assumptions when we talk about the Net Generation? Should traditional concepts, such as privacy, identity, free speech and journalism, be reconceptualized in cyberspace?
The panels include Digital Natives, Social Networks and the Virtual World; Content Control, Indecency and Pornography; Journalism in the Age of Convergence; New Media, Sociocultural Issues and Emerging Developments; Content Delivery, Multimedia Platforms and New Prosumers; Privacy, Identity and Brandjacking; Creativity 2.0, Technolegal Fixes and Copyright Reform; and a Closing Address: “Hong Kong–Creative Capital in Asia”. The panelists have come from all over the world and several disciplines. Hearing so many different views and learning about East Asian perspectives on intellectual property and privacy has been quite stimulating. Last, I want to thank Peter Yu for inviting me to participate and, as always, for being an excellent host.