Category: Technology

Announcing the We Robot 2015 Call for Papers

Here is the We Robot call for papers, via Ryan Calo:

We Robot invites submissions for the fourth annual robotics law and policy conference—We Robot 2015—to be held in Seattle, Washington on April 10-11, 2015 at the University of Washington School of Law. We Robot has been hosted twice at the University of Miami School of Law and once at Stanford Law School. The conference web site is at http://werobot2015.org.

We Robot 2015 seeks contributions by academics, practitioners, and others in the form of scholarly papers or demonstrations of technology or other projects. We Robot fosters conversations between the people designing, building, and deploying robots, and the people who design or influence the legal and social structures in which robots will operate. We particularly encourage contributions resulting from interdisciplinary collaborations, such as those between legal, ethical, or policy scholars and roboticists.

This conference will build on existing scholarship that explores how the increasing sophistication and autonomous decision-making capabilities of robots and their widespread deployment everywhere from the home, to hospitals, to public spaces, to the battlefield disrupts existing legal regimes or requires rethinking of various policy issues. We are particularly interested this year in “solutions,” i.e., projects with a normative or practical thesis aimed at helping to resolve issues around contemporary and anticipated robotic applications.

Interview on The Black Box Society

Balkinization just published an interview on my forthcoming book, The Black Box Society. Law profs may be interested in our dialogue on methodology—particularly, what the unique role of the legal scholar is in the midst of increasing academic specialization. I’ve tried to surface several strands of inspiration for the book.

How We’ll Know the Wikimedia Foundation is Serious About a Right to Remember

The “right to be forgotten” ruling in Europe has provoked a firestorm of protest from internet behemoths and some civil libertarians.* Few seem very familiar with classic privacy laws that govern automated data systems. Characteristic rhetoric comes from the Wikimedia Foundation:

The foundation which operates Wikipedia has issued new criticism of the “right to be forgotten” ruling, calling it “unforgivable censorship.” Speaking at the announcement of the Wikimedia Foundation’s first-ever transparency report in London, Wikipedia founder Jimmy Wales said the public had the “right to remember”.

I’m skeptical of this line of reasoning. But let’s take it at face value for now. How far should the right to remember extend? Consider the importance of automated ranking and rating systems in daily life: in contexts ranging from credit scores to terrorism risk assessments to Google search rankings. Do we have a “right to remember” all of these? To, say, fully review the record of automated processing years (or even decades) after it happens?

If the Wikimedia Foundation is serious about advocating a right to remember, it will apply the right to the key internet companies organizing online life for us. I’m not saying “open up all the algorithms now”; I respect the commercial rationale for trade secrecy. But years or decades after the key decisions are made, the value of the algorithms fades. Data involved could be anonymized. And just as Assange’s and Snowden’s revelations have been filtered through trusted intermediaries to protect vital interests, so too could an archive of Google or Facebook or Amazon ranking and rating decisions be limited to qualified researchers or journalists. Surely public knowledge about how exactly Google ranked and annotated Holocaust denial sites is at least as important as the right of a search engine to, say, distribute hacked medical records or credit card numbers.

So here’s my invitation to Lila Tretikov, Jimmy Wales, and Geoff Brigham: join me in calling for Google to commit to releasing a record of its decisions and data processing to an archive run by a third party, so future historians can understand how one of the most important companies in the world made decisions about how it ordered information. This is simply a bid to assure the preservation of (and access to) critical parts of our cultural, political, and economic history. Indeed, one of the first items I’d like to explore is exactly how Wikipedia itself was ranked so highly by Google at critical points in its history. Historians of Wikipedia deserve to know details about that part of its story. Don’t they have a right to remember?

*For more background, please note: we’ve recently hosted several excellent posts on the European Court of Justice’s interpretation of relevant directives. Though often called a “right to be forgotten,” the ruling in the Google Spain case might better be characterized as the application of due process, privacy, and anti-discrimination norms to automated data processing.

Facebook’s Model Users

Facebook’s recent psychology experiment has raised difficult questions about the ethical standards of data-driven companies, and the universities that collaborate with them. We are still learning exactly who did what before publication. Some are wisely calling for a “People’s Terms of Service” agreement to curb further abuses. Others are more focused on the responsibility to protect research subjects. As Jack Balkin has suggested, we need these massive internet platforms to act as fiduciaries.

The experiment fiasco is just the latest in a long history of ethically troubling decisions at that firm, and several others like it. And the time is long past for serious, international action to impose some basic ethical limits on the business practices these behemoths pursue.

Unfortunately, many in Silicon Valley still barely get what the fuss is about. For them, A/B testing is simply a way of life. Using it to make people feel better or worse is a far cry from, say, manipulating video poker machines to squeeze a few extra dollars out of desperate consumers. “Casino owners do that all the time!”, one can almost hear them rejoin.

Yet there are some revealing similarities between casinos and major internet platforms. Consider this analogy from Rob Horning:

Social media platforms are engineered to be sticky — that is, addictive, as Alexis Madrigal details in [a] post about the “machine zone.” . . . Like video slots, which incite extended periods of “time-on-machine” to assure “continuous gaming productivity” (i.e. money extraction from players), social-media sites are designed to maximize time-on-site, to make their users more valuable to advertisers (Instagram, incidentally, is adding advertising) and to ratchet up user productivity in the form of data sharing and processing that social-media sites reserve the rights to.
 

That’s one reason we get headlines like “Teens Can’t Stop Using Facebook Even Though They Hate It.” There are sociobiological routes to conditioning action. The platforms are constantly shaping us, based on sophisticated psychological profiles.

For Facebook to continue to meet Wall Street’s demands for growth, its user base must grow and/or individual users must become more “productive.” Predictive analytics demands standardization: forecastable estimates of revenue-per-user. The more a person clicks on ads and buys products, the better. Secondarily, the more a person draws other potential ad-clickers in (via clicked-on content, catalyzing discussions, crying for help, whatever), the more valuable they become to the platform. The “model users” gain visibility, subtly instructing by example how to act on the network. They’ll probably never attain the notoriety of a Lei Feng, but the Republic of Facebookistan gladly pays them the currency of attention, as long as the investment pays off for top managers and shareholders.

As more people understand the implications of enjoying Facebook “for free” (i.e., that they are the product of the service), they also see that its real paying customers are advertisers. As Katherine Hayles has stated, the critical question here is: “will ubiquitous computing be coopted as a stalking horse for predatory capitalism, or can we seize the opportunity” to deploy more emancipatory uses of it? I have expressed faith in the latter possibility, but Facebook continually validates Julie Cohen’s critique of a surveillance-innovation complex.

A More Nuanced View of Legal Automation

A Guardian writer has updated Farhad Manjoo’s classic report, “Will a Robot Steal Your Job?” Of course, lawyers are in the crosshairs. As Julius Stone noted in The Legal System and Lawyers’ Reasoning, scholars have addressed the automation of legal processes since at least the 1960s. Al Gore now says that a “new algorithm . . . makes it possible for one first year lawyer to do the same amount of legal research that used to require 500.”* But when one actually reads the studies trumpeted by the prophets of disruption, a more nuanced perspective emerges.

Let’s start with the experts cited first in the article:

Oxford academics Carl Benedikt Frey and Michael A Osborne have predicted computerisation could make nearly half of jobs redundant within 10 to 20 years. Office work and service roles, they wrote, were particularly at risk. But almost nothing is impervious to automation.

The idea of “computing” a legal obligation may seem strange at the outset, but we already enjoy (or endure) it daily. For example, a DVD may be licensed for play only in the US and Europe, and then be “coded” so it can play in those regions and not others. Were a human playing the DVD for you, he might demand a copy of the DVD’s terms of use and receipt, to see if it was authorized for playing in a given area. Computers need such a term translated into a language they can “understand.” More precisely, the legal terms embedded in the DVD must lead to predictable reactions from the hardware that encounters them. From Lessig to Virilio, the lesson is clear: “architectural regimes become computational, and vice versa.”
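As a toy illustration of that translation, the licensing term can be reduced to data plus a rule the player’s hardware evaluates. This sketch is purely hypothetical (the region labels and function names are invented; real DVDs use numbered region codes enforced in player firmware, not anything like this):

```python
# Hypothetical sketch: a licensing term ("playable in the US and Europe")
# expressed as data, plus the rule a player applies to it. All names here
# are invented for illustration.

disc_regions = {"US", "EU"}  # the "term of use" encoded on the disc

def can_play(disc_regions, player_region):
    # The legal term, reduced to a test the hardware can evaluate
    return player_region in disc_regions

print(can_play(disc_regions, "US"))  # True: playback licensed here
print(can_play(disc_regions, "JP"))  # False: outside the licensed regions
```

The point is not the trivial code but the translation: a term a human reader would interpret becomes a predictable, mechanical reaction.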

So certainly, to the extent lawyers are presently doing rather simple tasks, computation can replace them. But Frey & Osborne also identify barriers to successful automation:

1. Perception and manipulation tasks. Robots are still unable to match the depth and breadth of human perception.
2. Creative intelligence tasks. The psychological processes underlying human creativity are difficult to specify.
3. Social intelligence tasks. Human social intelligence is important in a wide range of work tasks, such as those involving negotiation, persuasion and care. (26)

Frey & Osborne only explicitly discuss legal research and document review (for example, identification and isolation among mass document collections) as easily automatable. They concede that “the computerisation of legal research will complement the work of lawyers” (17). They acknowledge that “for the work of lawyers to be fully automated, engineering bottlenecks to creative and social intelligence will need to be overcome.” In the end, they actually categorize “legal” careers as having a “low risk” of “computerization” (37).

The View from AI & Labor Economics

Those familiar with the smarter voices on this topic, like our guest blogger Harry Surden, would not be surprised. There is a world of difference between computation as substitution for attorneys, and computation as complement. The latter increases lawyers’ private income and (if properly deployed) contribution to society. That’s one reason I helped devise the course Health Data and Advocacy at Seton Hall (co-taught with a statistician and data visualization expert), and why I continue to teach (and research) the law of electronic health records in my seminar Health Information, Privacy, and Innovation, now that I’m at Maryland. As Surden observes, “many of the tasks performed by attorneys do appear to require the type of higher order intellectual skills that are beyond the capability of current techniques.” But they can be complemented by an awareness of rapid advances in software, apps, and data analysis.


Aereo and the Spirit of Technology Neutrality

Aereo is a broadcast re-transmitter. It leases to subscribers access to an antenna that captures over-the-air television, copies and digitizes the signal, and then sends it into the subscriber’s home, on a one-to-one basis, in real time or at the subscriber’s later desire. Aereo was poised to revolutionize the cable business—or hasten its collapse.

At least, it was.

Wednesday the Supreme Court unequivocally held that Aereo infringes copyright law, per Section 106(4) (the Transmit Clause). Aereo’s main backer, Barry Diller, quickly waved the white flag. Aereo is done—and it’s unclear what exactly Justice Breyer’s majority opinion portends for other technologies, despite the majority’s “believ[ing]” that the decision will not harm non-cable-like systems.

As James Grimmelmann succinctly noted amid a flurry of thoughtful tweets, “aereo resolves but it does not clarify.” And that might be an understatement. Eric Goldman notes four unanswered questions. (Amazingly, the majority opinion does not even engage Cablevision.) I’d add to that list the still incredibly vague line demarcating a public performance and the broader issue of technology neutrality in copyright law. (More on technology neutrality in a moment.)

The Court’s opinion relied heavily upon legislative history and, in particular, Congress’s abrogation of two earlier Supreme Court decisions on cable re-transmitters, Fortnightly Corp. v. United Artists Television and Teleprompter Corp. v. CBS. The Aereo Court limited discussion entirely to “cable-like” systems, punted on technologically similar non-cable-like systems, and left a big question about the dividing line.

Overall, the Court came off sounding blind to the technological realities of 2014—in stark contrast to its relatively technology-savvy decision in Riley v. California. (Dan’s take on Riley.)

Margot Kaminski has an excellent post for The New Republic addressing the varying treatment of cloud computing in Aereo and Riley, noting how cloud concerns were waved off in Aereo but factored into the Court ruling that the government normally must get a warrant to search an arrestee’s cell phone. The question, Margot asks, is why the different treatment?

The simplest answer would be that the Court was dealing with two different legal regimes: Constitutional privacy law versus statutory copyright. But at the heart of both decisions, the Court was asked to decide whether an old rule applied to a new technology. In one case, the Court was hesitant, tentative, and deferential to the past legal model. And in the other, the Court was unafraid to adjust the legal system for the disruptive technology of the future.

I’m a fan of simplicity, and I think it is particularly helpful in answering this question.

The Fourth Amendment is dynamic. As Orin Kerr has explained: “When new tools and new practices threaten to expand or contract police power in a significant way, courts adjust the level of Fourth Amendment protection to try to restore the prior equilibrium.” The 1976 Copyright Act is not. And by design.

With the 1976 Copyright Act, Congress adopted the principle of “technology neutrality” for copyrightable subject matter and exclusive rights—to “avoid the artificial and largely unjustifiable distinctions” that previously led to unlicensed exploitation of copyrighted works in an uncovered technological medium.  Rather, the 1976 Act was written to apply to known and unknown technologies.


Disruption: A Tarnished Brand

I’ve been hearing for years that law needs to be “disrupted.” “Legal rebels” and “reinventors” of law may want to take a look at Jill Lepore’s devastating account of Clay Christensen’s development of that buzzword. Lepore surfaces the ideology behind it, and suggests some shoddy research:

Christensen’s sources are often dubious and his logic questionable. His single citation for his investigation of the “disruptive transition from mechanical to electronic motor controls,” in which he identifies the Allen-Bradley Company as triumphing over four rivals, is a book called “The Bradley Legacy,” an account published by a foundation established by the company’s founders. This is akin to calling an actor the greatest talent in a generation after interviewing his publicist.

Critiques of Christensen’s forays into health and education are common, but Lepore takes the battle to his home territory of manufacturing, debunking “success stories” trumpeted by Christensen. She also exposes the continuing health of firms the Christensenites deemed doomed. For Lepore, disruption is less a scientific theory of management than a thin ideological veneer for pushing short-sighted, immature, and venal business models onto startups:

They are told that they should be reckless and ruthless. Their investors . . . tell them that the world is a terrifying place, moving at a devastating pace. “Today I run a venture capital firm and back the next generation of innovators who are, as I was throughout my earlier career, dead-focused on eating your lunch,” [one] writes. His job appears to be to convince a generation of people who want to do good and do well to learn, instead, remorselessness. Forget rules, obligations, your conscience, loyalty, a sense of the commonweal. . . . Don’t look back. Never pause. Disrupt or be disrupted.

In other words, disruption is a slick rebranding of the B-School Machiavellianism that brought us “systemic deregulation and financialization.” If you’re wondering why many top business scholars went from “higher aims to hired hands,” Lepore’s essay is a great place to start.


Tesla encourages free use of its patents—but will that protect users from liability?

Tesla Motors made big news yesterday with an open letter titled, “All Our Patent Are Belong to You.”

The gist of the letter was that Tesla Motors had decided that, in the interest of growing the market for electric vehicles and in the spirit of open source, it would not enforce its patents against “good faith” users. The key language was at the end of the second paragraph:

Tesla will not initiate patent lawsuits against anyone who, in good faith, wants to use our technology.

Tesla made clear it was not abandoning its patents, nor did it intend to stop acquiring new patents. Rather, it just wanted to clear the “intellectual property landmines” that it decided were endangering the “path to the creation of compelling electric vehicles.”

The announcement, made on the company’s website, immediately attracted laudatory media attention. (International Business Times, Los Angeles Times, San Jose Mercury News, Wall Street Journal, etc.) As one commentator for Forbes wrote:

[H]anding out patents to the world is smarter still when you think how resource-sapping the process is. Engineers want to build not fill out paperwork for nit-picking lawyers. Why bog them down with endless red tape form-filling only to end up having to build an expensive legal department to have to defend patents that would likely be got around anyway?

Patents are meant to slow competition but they also slow innovation. In an era when you can invent faster than you can patent, why not keep ahead by inventing?

That’s a pretty concise summary of the general response: Patents are bad, Tesla is good, and all friction in technological innovation would be solved if others followed Tesla’s lead.

Setting aside a pretty loaded normative debate, I had a practical concern. Just how legally enforceable would Tesla’s declaration be? That is, if a technologist practiced one of Tesla’s patents, would they really be free from liability?

The answer isn’t clear. (At least, it wasn’t to a number of us on Twitter yesterday.) Certainly, Tesla could enter into a gratis licensing arrangement with every interested party; a prudent GC should demand that Tesla do so, but it’s unlikely Tesla would want to invest the time and money. In a nod to the vagueness of Tesla’s announcement, CEO Elon Musk also told Wired that “the company is open to making simple agreements with companies that are worried about what using patents in ‘good faith’ really means.”

But assuming Tesla offers nothing more than a public promise not to sue “good faith” users, this announcement may be of little social benefit. Worse, it seems to me that such public promises could provide a new vehicle for trolling.

Sure, Tesla may be estopped from enforcing its patents—though estoppel requires reasonable reliance and this announcement is so vague that it’s difficult to imagine the reliance that would be reasonable—and Tesla isn’t in the patent trolling business anyway. (Sorry, patent-assertion-entity business.) But what if Tesla sold its patents or went bankrupt? Could a third party then enforce the patents? If it could, patents promised to be open source would seem a rich market for PAEs.

Tesla is not the first to pledge its patents as open source. In fact, as Clark Asay pointed out, IBM has already been accused of reneging on such a promise. (See: “IBM now appears to be claiming the right to nullify the 2005 pledge at its sole discretion, rendering it a meaningless confidence trick.”) The questions raised by the Tesla announcement are, thus, not new. And, given enough time, courts will have to answer them.


Computable Contracts Explained – Part 1

I had the occasion to teach “Computable Contracts” to the Stanford Class on Legal Informatics recently.  Although I have written about computable contracts here, I thought I’d explain the concept in a more accessible form.

I. Overview: What is a Computable Contract?

What is a Computable Contract?   In brief, a computable contract is a contract that a computer can “understand.” In some instances, computable contracting enables a computer to automatically assess whether the terms of a contract have been met.

How can computers understand contracts?  Here is the short answer (a more in-depth explanation appears below).  First, the concept of a computer “understanding” a contract is largely a metaphor.   The computer is not understanding the contract at the same deep conceptual or symbolic level as a literate person, but in a more limited sense.  Contracting parties express their contract in the language of computers – data – which allows the computer to reliably identify the contract components and subjects.  The parties also provide the computer with a series of rules that allow the computer to react in a sensible way that is consistent with the underlying meaning of the contractual promises.

Aren’t contracts complex, abstract, and executed in environments of legal and factual uncertainty?  Some are, but some aren’t. The short answer here is that the contracts that are made computable don’t involve the abstract, difficult or relatively uncertain legal topics that tend to occupy lawyers.  Rather (for the moment at least), computers are typically given contract terms and conditions with relatively well-defined subjects and determinable criteria that tend not to involve significant legal or factual uncertainty in the average case.

For this reason, there are limits to computable contracts: only small subsets of contracting scenarios can be made computable.  However, it turns out that these contexts are economically significant. Not all contracts can be made computable, but importantly, some can.

Importance of Computable Contracts 

There are a few reasons to pay attention to computable contracts.   For one, they have been quietly appearing in many industries, from finance to e-commerce.  Over the past 10 years, for instance, many modern contracts to purchase financial instruments (e.g. equities or derivatives) have transformed from traditional contracts, to electronic, “data-oriented” computable contracts.   Were you to examine a typical contract to purchase a standardized financial instrument these days, you would find that it looked more like a computer database record (i.e. computer data), and less like lawyerly writing in a Microsoft Word document.
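To make that concrete, here is a purely illustrative sketch of what such a “data-oriented” trade record might look like in spirit. All field names are invented; real markets use standardized formats such as FIX messages or FpML documents, not this layout.

```python
# Invented illustration of a trade contract as a database-style record
# rather than lawyerly prose. No real industry format is reproduced here.

trade_record = {
    "trade_id": "T-0001",
    "instrument_type": "equity",
    "ticker": "AAPL",
    "quantity": 100,
    "price_per_share": 400.00,
    "settlement_date": "2015-01-10",
    "buyer": "Party A",
    "seller": "Party B",
}

# Because the contract is itself a data object, downstream systems can
# consume it directly, e.g. computing the trade's notional value:
notional = trade_record["quantity"] * trade_record["price_per_share"]
print(notional)  # 40000.0
```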

Computable contracts also have new properties that traditional, English-language, paper contracts do not have.  I will describe this in more depth in the next post, but in short, computable contracts can serve as inputs to other computer systems.  These other systems can take computable contracts and do useful analysis not readily done with traditional contracts. For instance, a risk management system at a financial firm can take computable contracts as direct inputs for analysis, because, unlike traditional English contracts, computable contracts are data objects themselves.

II. Computable Contracts in More Detail

Having had a brief overview of computable contracts, the next few parts will discuss computable contracts in more detail.

A. What is a Computable Contract?

To understand computable contracts, it is helpful to start with a simple definition of a contract generally. 

A contract (roughly speaking) is a promise to do something in the future, usually according to some specified terms or conditions, with legal consequences if the promise is not performed.   For example, “I promise to sell you 100 shares of Apple stock for $400 per share on January 10, 2015.”

A computable contract is a contract that has been deliberately expressed by the contracting parties in such a way that a computer can:

1) understand what the contract is about;

2) determine whether or not the contract’s promises have been complied with (in some cases).
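A minimal sketch of that two-part definition, reusing the Apple-shares promise from the example above (the structure and field names are invented for illustration, not drawn from any real system):

```python
# Part 1: the promise expressed as data a computer can identify.
promise = {
    "action": "sell",
    "ticker": "AAPL",
    "quantity": 100,
    "price_per_share": 400.00,
    "due_date": "2015-01-10",
}

# Part 2: a rule for determining whether the promise was complied with.
def complied(promise, performance):
    return (performance["ticker"] == promise["ticker"]
            and performance["quantity"] == promise["quantity"]
            and performance["price_per_share"] == promise["price_per_share"]
            and performance["date"] <= promise["due_date"])  # ISO date strings compare lexicographically

delivery = {"ticker": "AAPL", "quantity": 100,
            "price_per_share": 400.00, "date": "2015-01-09"}
print(complied(promise, delivery))  # True: performed on time, on terms
```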

How can a computer “understand” a contract, and how can compliance with legal obligations be “computed” electronically?

To comprehend this, it is crucial to first appreciate the particular problems that computable contracts were developed to address.



Supreme Court Gives Patent Law New Bite (Definiteness)

I want to thank Danielle Citron and the other folks at Concurring Opinions for inviting me to blog.  As Danielle mentioned in her introduction, I am a law professor at the University of Colorado Law School focused on technology and law.  (More info about me is here: http://harrysurden.com; Twitter: @HarrySurden).

Patent Law’s Definiteness Requirement Has New Bite

The Supreme Court may have shaken up patent law quite a bit with its recent opinion in the Nautilus v. Biosig case (June 2, 2014).

At issue was patent law’s “definiteness” requirement, which is related to patent boundaries. As I (and others) have argued, uncertainty about patent boundaries (due to vague, broad and ambiguous claim language), and lack of notice as to the bounds of patent rights, is a major problem in patent law.

I will briefly explain patent law’s definiteness requirement, and then how the Supreme Court’s new definiteness standard may prove to be a significant change in patent law. In short – many patent claims – particularly those with vague or ambiguous language – may now be vulnerable to invalidity attacks following the Supreme Court’s new standard.

Patent Claims: Words Describing Inventions

In order to understand “definiteness”, it’s important to start with some patent law basics.  Patent law gives the patent holder exclusive rights over inventions – the right to prevent others from making, selling, or using a patented invention.  How do we know what inventions are covered by a particular patent?  They are described in the patent claims. 

Notably, patent claims describe the inventions that they cover using (primarily) words.

For instance, in the Supreme Court case at issue, the patent holder – Biosig – patented an invention – a heart-rate monitor.  Their patent used the following claim language to delineate their invention:

I claim a heart rate monitor for use in association with exercise apparatus comprising…

a live electrode

and a first common electrode mounted on said first half

in spaced relationship with each other…


So basically, the invention claimed was the kind of heart rate monitor that you might find on a treadmill.   The portion of the claim above described one part of the overall invention – two electrodes separated by some amount of space.  Presumably the exercising person holds on to these electrodes as she exercises, and the device reads the heart rate.

(Note: only a small part of the patent claim is shown; the actual claim is much longer.)

Patent Infringement: Comparing Words to Physical Products

So what is the relationship between the words of a patent claim and patent infringement?

In a typical patent infringement lawsuit, the patent holder alleges that the defendant is making or selling some product or process (here a product) that is covered by the language of a patent claim (the “accused product”).  To determine literal patent infringement, we compare the words of the patent claim to the defendant’s product, to see if the defendant’s product corresponds to what is delineated in the plaintiff’s patent claims.

For instance, in this case, Biosig alleged that Nautilus was selling a competing, infringing heart-rate monitor.  Literal patent infringement would be determined by comparing the words of Biosig’s patent claim (e.g. “a heart rate monitor with a live electrode…”) to a physical object – the competing heart-rate monitor product that Nautilus was selling (e.g. does Nautilus’ heart rate monitor have a part that can be considered a “live electrode”?).

Literal patent infringement is determined by systematically marching through each element (or described part) in Biosig’s patent claim, and comparing it to Nautilus’s competing product. If Nautilus’ competing product has every one of the “elements” (or parts) listed in Biosig’s patent claim, then Nautilus’s product would literally infringe Biosig’s patent claim.
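The element-marching procedure described above is mechanical enough to sketch in a few lines. Everything here is invented shorthand; in real litigation the meaning of each element is contested through claim construction:

```python
# Toy model: literal infringement as an element-by-element comparison.
claim_elements = [
    "heart rate monitor",
    "live electrode",
    "common electrode",
    "electrodes in spaced relationship",
]

accused_product = {
    "heart rate monitor",
    "live electrode",
    "common electrode",
    "electrodes in spaced relationship",
    "lcd display",  # extra features do not avoid infringement
}

def literally_infringes(elements, product_features):
    # Literal infringement only if EVERY claim element appears in the product
    return all(e in product_features for e in elements)

print(literally_infringes(claim_elements, accused_product))  # True
```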

If patent infringement is found, a patent holder can receive damages or in some cases, use the power of the court  to prevent the competitor from selling the product through an injunction.

Patent Claims – A Delicate Balance with Words

Writing patent claims involves a delicate balance.  On the one hand, a patent claim must be written in broad enough language that such a patent claim will cover competitors’ future products.

Why?  Well, imagine that Biosig had written their patent claim narrowly.  This would mean that in place of the broad language actually used (e.g. “electrodes in a spaced relationship”), Biosig had instead described the particular characteristics of the heart-rate monitor product that Biosig sold.  For instance, if Biosig’s heart-rate monitor product had two electrodes that were located exactly 4 inches apart, Biosig could have written their patent claim with language saying, “We claim a heart rate monitor with two electrodes exactly 4 inches apart,” rather than the general language they actually used, the two electrodes separated by a “spaced relationship.”

However, had Biosig written such a narrow patent, it might not be commercially valuable.  Competing makers of heart rate monitors such as Nautilus could easily change their products to “invent around” the claim so as not to infringe. A competitor might be able to avoid literally infringing by creating a heart-rate monitor with electrodes that were 8 inches apart.  For literal infringement purposes, a device with electrodes 8 inches apart would not literally infringe a patent that claims electrodes “exactly 4 inches apart.”
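The narrow-versus-broad contrast can be made vivid with the same toy model (the 4-inch and 8-inch figures are the hypotheticals from the discussion above, not facts from the case):

```python
# Toy model of "inventing around" a narrowly drafted claim.

def infringes_narrow_claim(spacing_inches):
    # Hypothetical narrow claim: electrodes "exactly 4 inches apart"
    return spacing_inches == 4

def infringes_broad_claim(spacing_inches):
    # The broader language actually used: merely "in spaced relationship"
    return spacing_inches > 0

competitor_spacing = 8  # competitor simply moves the electrodes apart
print(infringes_narrow_claim(competitor_spacing))  # False: invented around
print(infringes_broad_claim(competitor_spacing))   # True: still covered
```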

From a patent holder’s perspective, it is not ideal to write a patent claim too narrowly, because for a patent to be valuable, it has to be broad enough to cover the future products of your competitors in such a way that they can’t easily “invent around” and avoid infringement.  A patent claim is only as valuable (trolls aside) as the products or processes that fall under the patent claim words.  If you have a patent, but its claims do not cover any actual products or processes in the world because it is written too narrowly, it will not be commercially valuable.

Thus, general or abstract words (like “spaced relationship”) are often beneficial for patent holders, because they are often linguistically flexible enough to cover more variations of competitors’ future products.

Patent Uncertainty – Bad for Competitors (and the Public)

By contrast, general, broad, or abstract claim words are often not good for competitors (or the public generally).  Patent claims delineate the boundaries or “metes-and-bounds” of patent legal rights.  Other firms would like to know where their competitors’ patent rights begin and end.  This is so that they can estimate their risk of patent liability, know when to license, and in some cases, make products that avoid infringing their competitors’ patents.

However, when patent claim words are abstract, or highly uncertain, or have multiple plausible interpretations, firms cannot easily determine where their competitor’s patent rights end, and where they have the freedom to operate.  This can create a zone of uncertainty around research and development generally in certain areas of invention, perhaps reducing overall inventive activity for the public.
