Category: Antitrust


Antitrust in Obamaland

Antitrust enforcement was one area where most observers expected significant changes from the Bush years, particularly at the Antitrust Division of the Justice Department. For the past eight years, the Antitrust Division had vigorously prosecuted cartels but had not been active in monopolization or merger enforcement. In addition to bringing relatively few cases in these areas, the Division had filed a number of amicus briefs in support of defendants, opposed a petition for certiorari sought by its sister agency, the Federal Trade Commission, and issued a number of reports and policy recommendations that restricted the reach of the antitrust laws or imposed significant burdens on private plaintiffs. During this same period, the FTC proved more active in the competition area, particularly in the health care and intellectual property fields, which suggests that the Commission will show greater continuity despite key changes at the Commissioner and staff levels.

The key officials in the Obama administration came into the antitrust agencies promising change. Christine Varney, the new head of the Antitrust Division, gave a speech in her early days promising more vigorous enforcement and hearkening back to the days of Thurman Arnold during the latter half of the New Deal. At the same time, she repudiated a highly restrictive report on monopoly power issued during the waning days of the prior administration, a report the Justice Department had issued alone because a majority of the FTC refused to endorse it. In addition, the Division has reversed policy and filed an amicus brief in support of plaintiffs in a key Supreme Court case involving the pharmaceutical industry. Most recently, the Justice Department and the FTC jointly announced a new initiative to revisit the Merger Guidelines of the 1990s, which both agencies use to decide which mergers and acquisitions to challenge on competition grounds.


The Informant!

It’s not often that I hear about a new Hollywood movie based on the facts of a case that I first encountered while clerking, but The Informant!, directed by Steven Soderbergh and starring Matt Damon, is just such a film. It tells the story of Mark Whitacre, a central actor in a case decided while I was clerking for my judge on the Seventh Circuit. Whitacre served as the key informant in a successful FBI investigation into price-fixing charges against Archer Daniels Midland Co. that sent top executives to prison. As my co-clerk Kevin Metz observed, the case featured the type of direct evidence of an agreement to fix prices that antitrust professors explain is almost never available in antitrust prosecutions. Whitacre secretly recorded many hours of conversations with co-conspirators in the lysine industry over three years, all while bragging carelessly to others about his role as an FBI informant and embezzling millions from ADM under the FBI’s nose. During my clerkship year, we worked on a number of memorable cases, but United States v. Andreas probably featured the most colorful facts. Whitacre was an odd and unpredictable personality who suffered from bipolar disorder, a trait Matt Damon plays up for comic effect in the movie.

Google Books and the Limits of Courts

The Google Books litigation has inspired a lot of commentary on the web. As an early October fairness hearing approaches, a consensus appears to be building: the proposed settlement is too important and complex for a court to approve in its current form. Agent Lynn Chu has complained that “No one elected the[] ‘class representatives’ to represent America’s tens of thousands of authors and publishers to convey their digital rights to Google.” Pamela Samuelson, by all accounts one of the leading academics in American intellectual property law, has this to say:

The Google Book Search settlement will be, if approved, the most significant book industry development in the modern era [emphasis added]. . . . The Authors Guild has about 8000 members. OCLC has estimated that there are 22 million authors of books published in the U.S. since 1923 (the year before which books can be presumed to be in the public domain). Jan Constantine, a lawyer for the Authors Guild, is optimistic that authors and publishers of out-of-print books will sign up with the Registry, but there are many reasons to question this.

For one thing, the proposed settlement agreement implicitly estimates that only about 750,000 copyright owners will sign up with the Registry, at least in the near term. Second, many books are “orphans,” that is, books whose rights holders cannot be located by a reasonably diligent search. Third, many easily findable rights holders, particularly academic authors, would much rather make their works available on an open access basis than to sign up with the Registry. Fourth, signing up with the Registry will not be a simple matter, since the Registry won’t just take your word for it that you are the rights holder. You are going to have to prove your ownership claim.

The non-representativeness of the class is one ground on which it is possible to object to the proposed Book Search settlement. Other reasons to object or express concerns will be explored in subsequent articles. Objections must be filed with the court by September 4, 2009.

A suitable platform for hosting public discussions of the deal launched only a few weeks ago, thanks to the diligent efforts of James Grimmelmann (who is also organizing an academic conference on the issue in October). The proposed settlement raises a number of issues that may be addressed only by extensive regulation of the project, or by a public alternative dedicated to serving those marginalized by the current proposal.

From Antitrust to Anti-Systemic Risk

The “optimal size and complexity of developing countries’ financial systems” has been hotly debated in the economics community. Writing for the Harvard Business Review & Boston Globe, Duncan Watts focuses on our own dilemmas in a provocative account of complex systems:

[G]lobally interconnected and integrated financial networks just may be too complex to prevent crises like the current one from reoccurring. . . . A 2006 report co-sponsored by the Federal Reserve Bank of New York and the National Academy of Sciences concluded that even defining systemic risk was beyond the scope of any existing economic theory. Actually managing such a thing would be harder still, if only because the number of contingencies that a systemic risk model must anticipate grows exponentially with the connectivity of the system.

So if the complexity of our financial systems exceeds that of even the most sophisticated risk models, how can government regulators hope to manage the problem? There is no simple solution, but one approach is close to what the government already does when it decides that some institutions are “too big to fail,” and therefore must be saved – a strategy that, as we have seen recently, can cost hundreds of billions of taxpayer dollars. . . .

An alternate approach is to deal with the problem before crises emerge. On a routine basis, regulators could review the largest and most connected firms in each industry, and ask themselves essentially the same question that crisis situations already force them to answer: “Would the sudden failure of this company generate intolerable knock-on effects for the wider economy?” If the answer is “yes,” the firm could be required to downsize, or shed business lines in an orderly manner until regulators are satisfied that it no longer poses a serious systemic risk. Correspondingly, proposed mergers and acquisitions could be reviewed for their potential to create an entity that could not then be permitted to fail.

Of course, our system has been headed in precisely the opposite direction, largely thanks to the “best and brightest” now at Treasury and the Fed. As Simon Johnson puts it, we “pay too much deference to the expertise and presumed wisdom of a sector that screwed up massively.”

Google Book Search Scrutiny

Writing in Slate, Mark Gimein knocks down a number of straw-man arguments against the Google Book Search deal. I look forward to seeing how he grapples with more serious concerns, like those raised by James Grimmelmann. I’ve also been impressed by Christopher Suarez’s working paper on the need for antitrust scrutiny of the proposed deal. Suarez proposes a number of sensible settlement modifications that I hope the court will take seriously. The court doesn’t have much time to get this right, as the announcement of an October conference on the settlement shows.

Toward a Public Alternative in Digital Archiving and Search

With inimitable clarity, Cory Doctorow made the case for an open alternative to Google in The Guardian earlier this month. He focused on the secrecy of search:

[S]earch engines routinely disappear websites for violating unpublished, invisible rules. Many of these sites are spammers, link-farmers, malware sneezers and other gamers of the system. . . . The stakes for search-engine placement are so high that it’s inevitable that some people will try anything to get the right placement for their products, services, ideas and agendas. Hence the search engine’s prerogative of enforcing the death penalty on sites that undermine the quality of search.

[Nevertheless, i]t’s a terrible idea to vest this much power with one company, even one as fun, user-centered and technologically excellent as Google. It’s too much power for a handful of companies to wield.

Search engines like Google have some good reasons for keeping their algorithms confidential–if they were public, manipulators could quickly swamp Google users with irrelevant results. However, just as Comcast cannot circumvent net neutrality regulation by saying all its traffic management and spam-fighting methods are trade secrets, search engines should not be able to use such arguments to escape regulation altogether. Moreover, there are ways of developing a qualified transparency that would let a trusted third party examine a search engine’s conduct without exposing its business methods for all the world to see.

But Doctorow does not want regulation here; he wants an alternative. Having made a similar case for a “public option” in health insurance, I like this line of argument, but I think Doctorow underestimates the barriers to entry. Though he’s aware of the failure of Wikia, Doctorow wonders whether a “Wikipedia for search” could be built:

We can imagine a public, open process to write search engine ranking systems, crawlers and the other minutiae. But can an ad-hoc group of net-heads marshall the server resources to store copies of the entire Internet? . . . . It would require vast resources. But it would have one gigantic advantage over the proprietary search engines: rather than relying on weak “security through obscurity” to fight spammers, creeps and parasites, such a system could exploit the powerful principles of peer review that are the gold standard in all other areas of information security.

The “rival public system” approach has been suggested for search engines a few times before. About a decade ago, Introna & Nissenbaum demonstrated that “the conditions needed for a marketplace to function in a ‘democratic’ and efficient way are simply not met in the case of search engines.” Recognizing this, Jean-Noël Jeanneney made the case for a French-language alternative to dominant US-based search engines. The Quaero project in the EU appears to be answering that call, though in a far more dirigiste manner than Doctorow would probably like.

I have a few thoughts on a “public option” in search, building on a talk I gave at Yale Law’s Library 2.0 conference in the spring.

An Antitrust Angle on the Public Plan

Is genuine health reform possible? Several recent developments are promising. President Obama’s big Congressional majorities (plus the Specter defection) are reminiscent of the Johnson-era milieu that led to Medicare and Medicaid. Key interest groups are less “Harry and Louise” and more “try to appease.” Most importantly, the failures of managed care, consumer-directed health care, and other artifacts of the “ownership society” are now self-evident. As unemployment rises, lack of insurance spikes, compounding the misery of many of those unlucky enough to be thrown out of work.

What could derail real health reform? Most likely, fake health care reform, particularly the kind that assumes there is something near a “free market” in operation now. As health care antitrust scholar Thomas Greaney argued yesterday, markets for health care are often very concentrated or riddled with barriers to entry:

The unfortunate fact is that a majority of the country is served by a few dominant insurers. (In 16 states, one insurer accounts for more than 50 percent of private enrollment; in 36 states, three insurers have more than 65 percent of enrollment). Likewise, because of lax antitrust enforcement, most markets are characterized by dominant hospital systems and little competition among high-end physician specialists.

In these circumstances, which economists call “bilateral monopoly,” the players often reach an accommodation in which they share the monopoly profits rather than compete vigorously. A prime example is the experience in Massachusetts, where Blue Cross/Blue Shield, the dominant insurer, reached an understanding with the dominant hospital system, Partners Healthcare, that entrenched higher prices for health insurance and hospital care.

Some might hope that the Obama administration’s new emphasis on antitrust enforcement will solve that problem, but I would not hold my breath. After losing seven hospital merger cases in a row, the government is not exactly in a position to go storming into health care markets to demand competition. Only new antitrust laws are likely to accomplish much in that direction, and even if they were by some miracle adopted this year, I can’t imagine them having much effect within any reasonable time frame.


The Googlization of Advertising

Search engines are indispensable to the quest for helpful information in our data-saturated age. Although custom search engines attract small audiences, the big three (Google, Yahoo, and Microsoft) run the lion’s share of online searches, with Google performing 62% of U.S. Internet searches and Yahoo next in line at 17.5%. Not surprisingly, Google attracts a disproportionate share of online advertisers, the main source of revenue for search companies. The recent joint venture advertising agreement between Google and Yahoo heralds the further concentration of online search advertising from three hands to two by allowing Google to sell search ads that display next to Yahoo search results.

This Sunday, the Association of National Advertisers announced its opposition to the Google-Yahoo deal on the grounds that the partnership would “diminish competition, increase concentration of market power, limit choices currently available and raise prices to advertisers.” Frank Pasquale presented spirited and compelling testimony on this issue before the House Judiciary Committee’s Task Force on Competition Policy and Antitrust Laws this summer. (I attended the hearing and highly recommend viewing the C-SPAN recording; see here.) As Pasquale made vivid at the hearing, the joint venture agreement would cement Google’s dominance over the online advertising market. Benjamin Edelman of Harvard Business School explains that such excessive market share allows Google to control the ads generally available (and unavailable) to consumers. For instance, in August 2004, Google banned an ad critical of President Bush, but, of course, consumers did not know what they were missing. Worth serious consideration is Pasquale’s concern that the opacity of Google’s practices enables it to conceal any abuse of its soon-to-be overwhelming power in the online advertising market.

If You Read One Article on Antitrust This Year. . .

make it Maurice Stucke’s Better Competition Advocacy, 82 St. John’s L. Rev. 951 (2008). In this work, he convincingly argues that “The goals of antitrust law enforcement are subsumed by, but not necessarily co-extensive with, the goals of competition policy.” Stucke’s article not only extends an impressive line of work on competition law, but also offers some insights on the dangers of over-specialization for legal scholars generally. I’ll offer some excerpts now, and try to apply the piece to some current controversies later this week.

Stucke addresses four main questions in his article:

Prevailing competition advocacy glosses over four fundamental questions: First, what is competition? Second, what are the goals of a competition policy? Third, how does one achieve, if one can, the objectives of such desired competition? Fourth, how does one know if the economy is progressing toward these goals?

Stucke argues that conventional competition policy based on the work of the Chicago School answers all these questions in narrow and unsatisfying ways.



WALL*E and the Theory of the Firm

Over the weekend my son and I saw WALL*E, Pixar’s new story about the adventures of a robot living on a post-environmental apocalypse Earth in which the land has been entirely covered by mountains of trash. As it turns out, more than 700 years earlier humanity had ditched the planet under the leadership of BnL Corp., the super-retailer that seems to have taken over the world, replacing not only the government but all other economic actors. Despite the apparently heavy-handed plot that I just summarized, WALL*E is a delightful movie, and the obvious jabs at Wal*Mart and other big-box retailers are delivered with such charm and, oddly given the post-apocalyptic setting, understatement that, sometime Wal*Mart apologist that I am, I found myself carried effortlessly along by the story. That said, the vision of a world ruled by BnL Corp. got me thinking about the implicit theory of the firm underlying Pixar’s dystopia.

Firms, of course, are an embarrassment to economic theory. If the market is so good at coordinating the production of goods and services, why would you ever see firms, which exist as islands of central planning in a sea of unplanned spontaneous order? Since Coase’s groundbreaking article in the 1930s, the answer has been “transaction costs.” The central planning of the firm necessarily imposes costs, given the informational constraints that managers labor under. On the other hand, so long as those costs are less than the cost of coordinating the same activity through spot contracts in the market, the firm is more efficient than the alternatives. So what gives with BnL Corp.? Why would one firm get so big as to engulf all others? Here are some thoughts.
