Tagged: Intellectual Property


Supreme Court Gives Patent Law New Bite (Definiteness)

I want to thank Danielle Citron and the other folks at Concurring Opinions for inviting me to blog.  As Danielle mentioned in her introduction, I am a law professor at the University of Colorado Law School focused on technology and law.  (More info about me is here: http://harrysurden.com; Twitter: @HarrySurden).

Patent Law’s Definiteness Requirement Has New Bite

The Supreme Court may have shaken up patent law quite a bit with its recent opinion in the Nautilus v. Biosig case (June 2, 2014).

At issue was patent law’s “definiteness” requirement, which is related to patent boundaries. As I (and others) have argued, uncertainty about patent boundaries (due to vague, broad and ambiguous claim language), and lack of notice as to the bounds of patent rights, is a major problem in patent law.

I will briefly explain patent law’s definiteness requirement, and then how the Supreme Court’s new definiteness standard may prove to be a significant change in patent law. In short – many patent claims – particularly those with vague or ambiguous language – may now be vulnerable to invalidity attacks following the Supreme Court’s new standard.

Patent Claims: Words Describing Inventions

In order to understand “definiteness”, it’s important to start with some patent law basics.  Patent law gives the patent holder exclusive rights over inventions – the right to prevent others from making, selling, or using a patented invention.  How do we know what inventions are covered by a particular patent?  They are described in the patent claims. 

Notably, patent claims describe the inventions that they cover using (primarily) words.

For instance, in the Supreme Court case at issue, the patent holder – Biosig – patented an invention – a heart-rate monitor.  Their patent used the following claim language to delineate their invention:

“I claim a heart rate monitor for use in association with exercise apparatus comprising…

a live electrode

and a first common electrode mounted on said first half

in spaced relationship with each other…”


So basically, the invention claimed was the kind of heart rate monitor that you might find on a treadmill.   The portion of the claim above described one part of the overall invention – two electrodes separated by some amount of space.  Presumably the exercising person holds on to these electrodes as she exercises, and the device reads the heart rate.

(Note: only a small part of the patent claim is shown – the actual claim is much longer.)

Patent Infringement: Comparing Words to Physical Products

So what is the relationship between the words of a patent claim and patent infringement?

In a typical patent infringement lawsuit, the patent holder alleges that the defendant is making or selling some product or process (here a product) that is covered by the language of a patent claim (the “accused product”).  To determine literal patent infringement, we compare the words of the patent claim to the defendant’s product, to see if the defendant’s product corresponds to what is delineated in the plaintiff’s patent claims.

For instance, in this case, Biosig alleged that Nautilus was selling a competing, infringing heart-rate monitor.  Literal patent infringement would be determined by comparing the words of Biosig’s patent claim (e.g. “a heart rate monitor with a live electrode…”) to a physical object –  the competing heart-rate monitor product that Nautilus was selling (e.g. does Nautilus’ heart rate monitor have a part that can be considered a “live electrode”)?

Literal patent infringement is determined by systematically marching through each element (or described part) in Biosig’s patent claim, and comparing it to Nautilus’s competing product. If Nautilus’ competing product has every one of the “elements” (or parts) listed in Biosig’s patent claim, then Nautilus’s product would literally infringe Biosig’s patent claim.
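This element-by-element comparison (often called the “all elements” rule) can be sketched in a few lines of code. The claim elements and product features below are hypothetical simplifications I have invented for illustration; real infringement analysis turns on legal claim construction, not string matching.

```python
# Illustrative sketch of the "all elements" rule: a product literally
# infringes only if it contains every element recited in the claim.
# Element and feature names are hypothetical simplifications.

def literally_infringes(claim_elements, product_features):
    """True only if every claimed element is present in the accused product."""
    return all(element in product_features for element in claim_elements)

biosig_claim = {
    "heart rate monitor",
    "live electrode",
    "common electrode",
    "electrodes in spaced relationship",
}

accused_product = {
    "heart rate monitor",
    "live electrode",
    "common electrode",
    "electrodes in spaced relationship",
    "LCD display",  # extra, unclaimed features do not avoid infringement
}

print(literally_infringes(biosig_claim, accused_product))  # True

# Omit even one claimed element and literal infringement fails.
redesigned = accused_product - {"electrodes in spaced relationship"}
print(literally_infringes(biosig_claim, redesigned))  # False
```

Note that the accused product’s extra “LCD display” feature is irrelevant; what matters is only whether every claimed element is present.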

If patent infringement is found, a patent holder can receive damages or, in some cases, use the power of the court to prevent the competitor from selling the product through an injunction.

Patent Claims – A Delicate Balance with Words

Writing patent claims involves a delicate balance.  On the one hand, a patent claim must be written in broad enough language that such a patent claim will cover competitors’ future products.

Why?  Well, imagine that Biosig had written their patent claim narrowly.  This would mean that in place of the broad language actually used (e.g. “electrodes in a spaced relationship”), Biosig had instead described the particular characteristics of the heart-rate monitor product that Biosig sold.  For instance, if Biosig’s heart-rate monitor product had two electrodes that were located exactly 4 inches apart, Biosig could have written their patent claim with language saying, “We claim a heart rate monitor with two electrodes exactly 4 inches apart,” rather than the general language they actually used – the two electrodes separated by a “spaced relationship.”

However, had Biosig written such a narrow patent, it might not be commercially valuable.  Competing makers of heart rate monitors such as Nautilus could easily change their products to “invent around” the claim so as not to infringe. A competitor might be able to avoid literally infringing by creating a heart-rate monitor with electrodes that were 8 inches apart.  For literal infringement purposes, a device with electrodes 8 inches apart would not literally infringe a patent that claims electrodes “exactly 4 inches apart.”
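The narrow-versus-broad trade-off can be made concrete with a toy model. The spacing values and the reading of “spaced relationship” below are my own hypothetical simplifications, not legal analysis; they just show why the 8-inch design-around escapes the narrow claim but not the broad one.

```python
# Toy model of claim scope (hypothetical numbers, not legal analysis).

def narrow_claim_reads_on(spacing_inches):
    # Narrow claim: electrodes "exactly 4 inches apart".
    return spacing_inches == 4.0

def broad_claim_reads_on(spacing_inches):
    # Broad claim: electrodes merely "in spaced relationship"
    # (any nonzero separation, on this simplified reading).
    return spacing_inches > 0

competitor_spacing = 8.0  # the design-around described above

print(narrow_claim_reads_on(competitor_spacing))  # False: no literal infringement
print(broad_claim_reads_on(competitor_spacing))   # True: still covered
```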

From a patent holder’s perspective, it is not ideal to write a patent claim too narrowly, because for a patent to be valuable, it has to be broad enough to cover the future products of your competitors in such a way that they can’t easily “invent around” and avoid infringement.  A patent claim is only as valuable (trolls aside) as the products or processes that fall under the patent claim words.  If you have a patent, but its claims do not cover any actual products or processes in the world because it is written too narrowly, it will not be commercially valuable.

Thus, general or abstract words (like “spaced relationship”) are often beneficial for patent holders, because they are often linguistically flexible enough to cover more variations of competitors’ future products.

Patent Uncertainty – Bad for Competitors (and the Public)

By contrast, general, broad, or abstract claim words are often not good for competitors (or the public generally).  Patent claims delineate the boundaries or “metes-and-bounds” of patent legal rights.  Other firms would like to know where their competitors’ patent rights begin and end.  This is so that they can estimate their risk of patent liability, know when to license, and in some cases, make products that avoid infringing their competitors’ patents.

However, when patent claim words are abstract, or highly uncertain, or have multiple plausible interpretations, firms cannot easily determine where their competitor’s patent rights end, and where they have the freedom to operate.  This can create a zone of uncertainty around research and development generally in certain areas of invention, perhaps reducing overall inventive activity for the public.



Tall Latte with a Double Shot of Tax Avoidance

Intellectual property has become a major tax-avoidance vehicle for multinationals. Front-page articles in the New York Times and Wall Street Journal have detailed how IP-heavy companies like Apple, Google, and Big Pharma play games with their IP to avoid taxes on a massive scale. For example, Apple uses IP-based tax-avoidance strategies to reduce its effective tax rate to approximately 8%, well below the statutory 35% corporate tax rate (and well below most middle-class Americans’ tax rates).

Two characteristics of IP make it the ideal tax-avoidance vehicle. First, the uniqueness of every piece of IP makes its fair market value extremely hard to establish, allowing taxpayers to choose whatever valuations result in the least tax. Second, unlike workers or physical assets like factories or stores, IP can easily be moved to tax havens via mere paperwork.

But Starbucks is a bricks-and-mortar retailer dependent upon physical presence in high-tax countries. It wouldn’t seem to be in a position to use these IP-based tax tricks. Yet in an excellent, eye-opening paper, Edward Kleinbard (USC) delves into the strategies that Starbucks uses to substantially reduce its worldwide tax burden. Most interestingly, Starbucks puts IP like trademarks, proprietary roasting methods, operational expertise, and store trade dress into low-tax jurisdictions. Kleinbard cogently observes that the ability of a bricks-and-mortar retailer like Starbucks to play such games demonstrates how deep the flaws run in current U.S. and international tax policy.

 


Proxy Patent Litigation

In the last decade or so, patent litigation in the United States has undergone enormous changes. Perhaps most profound is the rise in enforcement of patents held by people and entities who don’t make any products or otherwise participate in the marketplace. Some call these patent holders ‘non-practicing entities’ (NPEs), while others use the term ‘patent assertion entities’ (PAEs), and some pejoratively refer to some or all of these patent holders as ‘trolls.’ These outsiders come in many different flavors: individual inventors, universities, failed startups, and holding companies that own a patent or family of patents. 

This post is about a particular type of outsider that is relatively new: the mass patent aggregator. The mass patent aggregator owns or controls a significant number of patents – hundreds or even thousands – which it acquired from different sources, including from companies that manufacture products. These mass aggregators often seek to license their portfolios to large practicing entities for significant amounts of money, sometimes using infringement litigation as the vehicle. Aggregators often focus their portfolios on certain industries such as consumer electronics.

Mass aggregator patent litigation and ordinary patent litigation appear to differ in one important aspect. Mass aggregators sue on a few patents in their portfolio, which serve as proxies for the quality of their entire portfolio. The parties use the court’s views of the litigated patents to determine how to value the full patent portfolio. By litigating only a small subset of their portfolio, the aggregator and potential licensee avoid the expense of litigating all of the patents. But the court adjudicates the dispute completely oblivious to the proxy aspect of the litigation. Instead, the court handles it like every other case – by analyzing the merits of the various claims and defenses.

If the court understood the underlying dispute was litigation-by-proxy, would it (or could it) proceed any differently? I will discuss my thoughts on this question in another blog post. For now, I have a question: does proxy litigation occur in other areas of law?


Stanford Law Review Online: Anticipating Patentable Subject Matter


The Stanford Law Review Online has just published an Essay by Dan L. Burk entitled Anticipating Patentable Subject Matter. Professor Burk argues that the fact that something might be found in nature should not necessarily preclude its patentability:

The Supreme Court has added to its upcoming docket Association for Molecular Pathology v. Myriad Genetics, Inc., to consider the question: “Are human genes patentable?” This question implicates patent law’s “products of nature” doctrine, which excludes from patentability naturally occurring materials. The Supreme Court has previously recognized that “anything under the sun that is made by man” falls within patentable subject matter, implying that things under the sun not made by man do not fall within patentable subject matter.

One of the recurring arguments for classifying genes as products of nature has been that these materials, even if created in the laboratory, could sometimes instead have been located by scouring the contents of human cells. But virtually the same argument has been advanced and rejected in another area of patent law: the novelty of patented inventions. The rule in that context has been that we reward the inventor who provides us with access to the materials, even if in hindsight they might have already been present in the prior art. As a matter of doctrine and policy, the rule for patentable subject matter should be the same.

He concludes:

“I can find the invention somewhere in nature once an inventor has shown it to me” is clearly the wrong standard for a patent system that hopes to promote progress in the useful arts. The fact that a version of the invention may have previously existed, unrecognized, unavailable, and unappreciated, should be irrelevant to patentability under either novelty or subject matter. The proper question is: did the inventor make available to humankind something we didn’t have available before? On this standard, the reverse transcribed molecules created by the inventors in Myriad are clearly patentable subject matter.

Read the full article, Anticipating Patentable Subject Matter at the Stanford Law Review Online.


Stanford Law Review Online: In Memoriam Best Mode


The Stanford Law Review Online has just published an Essay by Lee Petherbridge and Jason Rantanen entitled In Memoriam Best Mode. Professors Petherbridge and Rantanen discuss an overlooked element of the Leahy-Smith America Invents Act—the de facto elimination of the requirement that inventors include a description of the “best mode” of practicing their inventions in patent applications:

On September 16, 2011, President Obama signed into law the Leahy-Smith America Invents Act. It embodies the most substantial legislative overhaul of patent law and practice in more than half a century. Commentators have begun the sizable task of unearthing and calling attention to the many effects the Act may have on the American and international innovation communities. Debates have sprung up over the consequences to inventors small and large, and commentators have obsessed over the Act’s so-called “first-to-file” and “post-grant review” provisions. Lost in the frenzy to understand the consequences of the new Act has been the demise of patent law’s “best mode” requirement.

The purpose of this short essay is to draw attention to a benefit the best mode requirement provides—or perhaps “provided” would be a better word—to the patent system that has not been the subject of previous discussion. The benefit we describe directly challenges the conventional attitude that best mode is divorced from the realities of the patent system and the commercial marketplace. Our analysis suggests that patent reformers may have been much too quick to dismiss best mode as a largely irrelevant, and mostly problematic, doctrine.

They conclude:

Even while best mode can produce patent disclosures that have broader prior art effect, it simultaneously can cooperate with the doctrines of claim construction and written description to produce patents with claims that may be construed as having a narrower scope. Detailed descriptions of especially effective embodiments of an invention can have the effect of introducing elements that courts often find, either through the application of claim construction or written description doctrines, to be essential elements of an invention. Competitors that do not employ such essential elements are not infringers. Thus, best mode can further help establish and maintain the public domain by limiting the amount of information restricted by patents, thereby increasing the distance between bubbles of patent-restricted information.

Read the full article, In Memoriam Best Mode by Lee Petherbridge and Jason Rantanen, at the Stanford Law Review Online.


Better Stories, Better Laws, Better Culture

I first happened across Julie Cohen’s work around two years ago, when I started researching privacy concerns related to Amazon.com’s e-reading device, Kindle.  Law professor Jessica Litman and free software doyen Richard Stallman had both talked about a “right to read,” but never was this concept placed on so sure a legal footing as it was in Cohen’s essay from 1996, “A Right to Read Anonymously.”  Her piece helped me to understand the illiberal tendencies of Kindle and other leading commercial e-readers, which are (and I’m pleased more people are coming to understand this) data gatherers as much as they are appliances for delivering and consuming texts of various kinds.

Truth be told, while my engagement with Cohen’s “Right to Read Anonymously” essay proved productive for this particular project, it also provoked a broader philosophical crisis in my work.  The move into rights discourse was a major departure — a ticket, if you will, into the world of liberal political and legal theory.  Many there welcomed me with open arms, despite the awkwardness with which I shouldered an unfamiliar brand of baggage trademarked under the name, “Possessive Individualism.”  One good soul did manage to ask about the implications of my venturing forth into a notion of selfhood vested in the concept of private property.  I couldn’t muster much of an answer beyond suggesting, sheepishly, that it was something I needed to work through.

It’s difficult and even problematic to divine back-story based on a single text.  Still, having read Cohen’s latest, Configuring the Networked Self, I suspect that she may have undergone a crisis not unlike my own.  The sixteen years spanning “A Right to Read Anonymously” and Configuring the Networked Self are enormous.  I mean that less in terms of the time frame (during which Cohen was highly productive, let’s be clear) than in terms of the refinement in the thinking.  Between 1996 and 2012 you see the emergence of a confident, postliberal thinker.  This is someone who, confronted with the complexities of everyday life in highly technologized societies, now sees possessive individualism for what it is: a reductive management strategy, one whose conception of society seems more appropriate to describing life on a preschool playground than it does to forms of interaction mediated by the likes of Facebook, Google, Twitter, Apple, and Amazon.

In this, Configuring the Networked Self is an extraordinary work of synthesis, drawing together a diverse array of fields and literatures: legal studies in its many guises, especially its critical variants; science and technology studies; human-computer interaction; phenomenology; post-structuralist philosophy; anthropology; American studies; and surely more.  More to the point, it’s an unusually generous example of scholarly work, given Cohen’s ability to see in and draw out of this material its very best contributions.

I’m tempted to characterize the book as a work of cultural studies given the central role the categories culture and everyday life play in the text, although I’m not sure Cohen would have chosen that identification herself.  I say this not only because of the book’s serious challenges to liberalism, but also because of the sophisticated way in which Cohen situates the cultural realm.

This is more than just a way of saying she takes culture seriously.  Many legal scholars have taken culture seriously, especially those interested in questions of privacy and intellectual property, which are two of Cohen’s foremost concerns.  What sets Configuring the Networked Self apart from the vast majority of culturally inflected legal scholarship is her unwillingness to take for granted the definition — you might even say, “being” — of the category, culture.  Consider this passage, for example, where she discusses Lawrence Lessig’s pathbreaking book Code and Other Laws of Cyberspace:

The four-part Code framework…cannot take us where we need to go.  An account of regulation emerging from the Newtonian interaction of code, law, market, and norms [i.e., culture] is far too simple regarding both instrumentalities and effects.  The architectures of control now coalescing around issues of copyright and security signal systemic realignments in the ordering of vast sectors of activity both inside and outside markets, in response to asserted needs that are both economic and societal.  (chap. 7, p. 24)

What Cohen is asking us to do here is to see culture not as a domain distinct from the legal, or the technological, or the economic, which is to say, something to be acted upon (regulated) by one or more of these adjacent spheres.  This liberal-instrumental (“Newtonian”) view may have been appropriate in an earlier historical moment, but not today.  Instead, she is urging us to see how these categories are increasingly embedded in one another and how, then, the boundaries separating the one from the other have grown increasingly diffuse and therefore difficult to manage.

The implications of this view are compelling, especially where law and culture are concerned.  The psychologist Abraham Maslow once said, “it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.”  In the old, liberal view, one wielded the law in precisely this way — as a blunt instrument.  Cohen, for her part, still appreciates how the law’s “resolute pragmatism” offers an antidote to despair (chap. 1, p. 20), but her analysis of the “ordinary routines and rhythms of everyday practice” in an around networked culture leads her to a subtler conclusion (chap. 1, p. 21).  She writes: “practice does not need to wait for an official version of culture to lead the way….We need stories that remind people how meaning emerges from the uncontrolled and unexpected — stories that highlight the importance of cultural play and of spaces and contexts within which play occurs” (chap. 10, p. 1).

It’s not enough, then, to regulate with a delicate hand and then “punt to culture,” as one attorney memorably put it in an anthropological study of the free software movement.  Instead, Cohen seems to be suggesting that we treat legal discourse itself as a form of storytelling, one akin to poetry, prose, or any number of other types of everyday cultural practice.  Important though they may be, law and jurisprudence are but one means for narrating a society, or for arriving at its self-understandings and range of acceptable behaviors.

Indeed, we’re only as good as the stories we tell ourselves.  This much Jaron Lanier, one of the participants in this week’s symposium, suggested in his recent book, You Are Not a Gadget.  There he showed how the metaphorics of desktops and filing, generative though they may be, have nonetheless limited the imaginativeness of computer interface design.  We deserve computers that are both functionally richer and experientially more robust, he insists, and to achieve that we need to start telling more sophisticated stories about the relationship of digital technologies and the human body.  Lousy stories, in short, make for lousy technologies.

Cohen arrives at an analogous conclusion.  Liberalism, generative though it may be, has nonetheless limited our ability to conceive of the relationships among law, culture, technology, and markets.  They are all in one another and of one another.  And until we can figure out how to narrate that complexity, we’ll be at a loss to know how to live ethically, or at the very least mindfully, in a densely interconnected and information-rich world.  Lousy stories make for lousy laws and ultimately, then, for lousy understandings of culture.

The purposes of Configuring the Networked Self are many, no doubt.  For those of us working in the twilight zone of law, culture, and technology, it is a touchstone for how to navigate postliberal life with greater grasp — intellectually, experientially, and argumentatively.  It is, in other words, an important first chapter in a better story about ordinary life in a high-tech world.


Thoughts on Ammori’s Free Speech Architecture and the Golan decision

Thank you to Marvin for an excellent article to read and discuss, and thank you Concurring Opinions for providing a public forum for our discussion.

In the article, the critical approach that Marvin takes to challenge the “standard” model of the First Amendment is really interesting. He claims that the standard model of the First Amendment focuses on preserving speakers’ freedom by restricting government action and leaves any affirmative obligations for government to sustain open public spaces to a patchwork of exceptions lacking any coherent theory or principles. A significant consequence of this model is that open public spaces for speech—I want to substitute “infrastructure” for “spaces”—are marginalized and taken for granted. My forthcoming book—Infrastructure: The Social Value of Shared Resources—explains why such marginalization occurs in this and various other contexts and develops a theory to support the exceptions. But I’ll leave those thoughts aside for now and perhaps explore them in another post. And I’ll leave it to the First Amendment scholars to debate Marvin’s claim about what is the standard model for the First Amendment.

Instead, I would like to point out how a similar (maybe the same) problem can be seen in the Supreme Court’s most recent copyright opinion. In Golan v. Holder, Justice Ginsburg marginalizes the public domain in startling fashion. Since it is a copyright case, the “model” is flipped around: government is empowered to grant exclusive rights (and restrict some speakers’ freedom), and any restrictions on the government’s power to do so are limited to narrow exceptions, i.e., the idea-expression distinction and fair use. A central argument in the case was that the public domain itself is another restriction. The public domain is not expressly mentioned in the IP Clause of the Constitution, but arguably, it is implicit throughout (Progress in Science and the Useful Arts, Limited Times). Besides, the public domain is inescapably part of the reality that we stand on the shoulders of generations of giants. Most copyright scholars believed that Congress could not grant copyright to works in the public domain (and probably thought that the issue raised in the case – involving restoration for foreign works that had not been granted copyright protection in the U.S. – presented an exceptional situation that might be dealt with as such). But the Court declined to rule narrowly and firmly rejected the argument that “the Constitution renders the public domain largely untouchable by Congress.” In the end, Congress appears to have incredibly broad latitude to exercise its power, limited only by the need to preserve the “traditional contours.”

Of course, it is much more troublesome that the Supreme Court (rather than scholars interpreting Supreme Court cases) has adopted a flawed conceptual model that marginalizes basic public infrastructure. We’re stuck with it.


Stanford Law Review Online: Don’t Break the Internet


The Stanford Law Review Online has just published a piece by Mark Lemley, David S. Levine, and David G. Post on the PROTECT IP Act and the Stop Online Piracy Act. In Don’t Break the Internet, they argue that the two bills — intended to counter online copyright and trademark infringement — “share an underlying approach and an enforcement philosophy that pose grave constitutional problems and that could have potentially disastrous consequences for the stability and security of the Internet’s addressing system, for the principle of interconnectivity that has helped drive the Internet’s extraordinary growth, and for free expression.”

They write:

These bills, and the enforcement philosophy that underlies them, represent a dramatic retreat from this country’s tradition of leadership in supporting the free exchange of information and ideas on the Internet. At a time when many foreign governments have dramatically stepped up their efforts to censor Internet communications, these bills would incorporate into U.S. law a principle more closely associated with those repressive regimes: a right to insist on the removal of content from the global Internet, regardless of where it may have originated or be located, in service of the exigencies of domestic law.

Read the full article, Don’t Break the Internet by Mark Lemley, David S. Levine, and David G. Post, at the Stanford Law Review Online.

Note: Corrected typo in first paragraph.


The Age of Intellectual Property?

Are we in the Age of Intellectual Property?

It’s become a truism in IP scholarship to introduce a discussion by acknowledging the remarkable recent rise in popular, scholarly, and political interest in our field. Thus readers will recognize a familiar sentiment in the opening line of Amy Kapczynski and Gaëlle Krikorian’s new book:

A decade or two ago, the words “intellectual property” were rarely heard in polite company, much less in street demonstrations or on college campuses. Today, this once technical concept has become a conceptual battlefield.

Only recently, however, has it become possible to put this anecdotal consensus to empirical test.

In December 2010, Google launched ngrams, a simple tool for searching its vast repository of digitized books and charting the frequency of specific terms over time. (It controls for the fact that there are many more books being published today.)
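That control for publishing volume amounts to a simple normalization: raw counts are divided by each year’s total corpus size, so the chart shows relative frequency rather than raw volume. A toy calculation makes the point; all numbers below are invented for illustration.

```python
# Toy illustration of per-year frequency normalization (invented numbers).
raw_counts  = {1980: 120, 2000: 900}               # occurrences of a phrase
corpus_size = {1980: 1_000_000, 2000: 10_000_000}  # total n-grams that year

frequency = {year: raw_counts[year] / corpus_size[year] for year in raw_counts}

# Raw counts rose 7.5x, but relative frequency actually fell:
print(frequency[1980])  # 0.00012
print(frequency[2000])  # 9e-05
```

Without this normalization, nearly every term would appear to “rise” over time simply because far more books are published now than in earlier decades.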

If you haven’t already played around with this tool to explore your own topics of interest, you should. While you’re at it, take a stab at explaining why writing on the Supreme Court rose steadily until approximately 1935 and has dropped just as steadily ever since!

Back to our topic, though. What does this data reveal about the prominence of intellectual property in published discourse?

I generated two graphs, both charting the terms “intellectual property,” “copyright,” “patent,” and “trademark.” First, the longview:


Stalking About Your Generation

Yesterday, I had the all-too-brief pleasure of sitting in on the first couple of talks at the Wisconsin Law Review’s Symposium, Intergenerational Equity and Intellectual Property, here in Madison.

Organized by my colleague, Shubha Ghosh (and starring, among others, CoOp-erator Deven Desai), the goal is important:  How do we understand the intergenerational consequences of a legal regime—intellectual property—that is strongly determined by the present, but which has significant, but under-theorized, consequences for the future?  Fights about extending the term of the Mickey Mouse copyright—or any set of long-haul rights—don’t just affect my kids, but potentially their kids, their kids’ kids, and so on.  These are, in short, really fights about intergenerational equity.

I was only able to hear Michigan’s Peggy Radin (Property Longa, Vita Brevis) and Penn’s Matt Adler (Intergenerational Equity: Puzzles for Welfarists), but as expected, both provided awesome overviews of these sorts of problems.  As Radin pointed out, intellectual property (knowledge and information law generally) always involves two types of generational problems: One is temporal (my parents, me, my kids, their kids, etc.); the other is technological (my students barely know from videotape; I will never beat my daughter at any computer game).

Adler explained that it is easy (and perhaps imprudent) to dismiss the utility of welfare economics as a tool to make these sorts of decisions.  Certainly, we might say, Benthamite sums of utils could predict little for those not in existence (the future):  what would their utility function be, really?

Hope I die before you get old

Yet, he observed, robust and subtle analytic models and conceptual frameworks are being developed by the Sens and Arrows of the world, and they may (if the future is bright) help develop more equitable and effective decision tools for matters with a long temporal reach.

Those who follow state politics may find this all a bit ironic. Wisconsin’s recent election was a decisive victory for Republicans, who captured both houses of the legislature and the Governor’s office on a message which may strain the state’s motto, “Forward.”

If Republicans keep their word, tax breaks for the rich and elderly will replace education and healthcare spending for the young and unborn; fossil fuel (old tech) subsidies will replace biofuel (new tech) development; and the University may have to fight to continue its path-breaking stem-cell research, certainly a way to kill both jobs in the present and medical miracles in the future. This may be good for baby boomers, but isn’t likely so hot for their grandkids.

Hope you die before I get old

Wisconsin’s liberals are, of course, despondent over their loss of power and position.  Yet, forecasting and discounting long-term causation are among the things that make questions of intergenerational equity  so interesting and difficult.  I doubt Newt Gingrich thought in 1994 that the Contract with America would virtually assure Bill Clinton a second term, but today the former seems to have led to the latter.   Likewise, it is certain that neither Jeremy Bentham nor Pete Townshend could have predicted the duration of their memetic contributions to today’s discussions about tomorrow.  They probably just thought it was all rock and roll.