Archive for the ‘Antitrust’ Category
posted by Mark Patterson
(This is a guest post from Professor Mark R. Patterson of Fordham Law School. As someone who has participated in panels on antitrust with Prof. Patterson, I thought our readers would be interested in his perspective. –Frank Pasquale.)
The two claims above, from an essay by James Grimmelmann, are at the center of the conflict over regulation of search engines. Some argue that Google is a powerful gatekeeper for competing firms’ access to customers, so that it must operate in an objective or neutral manner to preserve a level competitive playing field. Those who make this argument necessarily assume that we can assess objectivity or neutrality in this context. Others, like Grimmelmann, support the first statement above, arguing that there is no objective, neutral means of assessing search results, so that there is no way to regulate search engines.
The European Commission (EC), having investigated Google’s practices and concluded that there are “competition concerns,” is apparently on the pro-regulation side, because it is entertaining proposed commitments from Google to address those concerns. (The U.S. F.T.C. conducted its own investigation and closed it without action, concluding that there was insufficient evidence to support the claim that Google’s practices lacked a legitimate business justification.) Google proposed a first set of commitments to the EC in April, but the Commission received “very negative” feedback from a market test of those commitments, so it asked Google for an improved proposal. Last month, Google proposed a second set of commitments. This new proposal was not put to a market test. Instead, the EC sent private inquiries to the complainants in the case and other market participants. Nevertheless, the proposal was leaked, and it offers much food for thought.
posted by Orly Lobel
What a rollercoaster week of incredibly thoughtful reviews of Talent Wants to Be Free! I am deeply grateful to all the participants of the symposium. In The Age of Mass Mobility: Freedom and Insecurity, Anupam Chander, continuing Frank Pasquale’s and Matt Bodie’s questions about worker freedom and market power, asks whether Talent Wants to Be Free overly celebrates individualism, perhaps at the expense of a shared commitment to collective production, innovation, and equality. Deven Desai in What Sort of Innovation? asks about the kinds of investments and knowledge that are likely to be encouraged through private markets versus public funding. And in Free Labor, Free Organizations, Competition and a Sports Analogy, Shubha Ghosh reminds us that to create true freedom in markets we need to look closely at competition policy and antitrust law. These questions about freedom/controls, individualism/collectivity, and private/public are coming from left and right. And rightly so. These are fundamental tensions in the greater project of human progress, and Talent Wants to Be Free strives to show how certain dualities are pervasive and unresolvable. As Brett suggested, that’s where we need to be in the real world. From an innovation perspective, I describe in the book how “each of us holds competing ideas about the essence of innovation and conflicting views about the drive behind artistic and inventive work. The classic (no doubt romantic) image of invention is that of exogenous shocks, radical breakthroughs, and sweeping discoveries that revolutionize all that was before. The lone inventor is understood to be driven by a thirst for knowledge and a unique capacity to find what no one has seen before. But the solitude in the romantic image of the lone inventor or artist also leads to an image of the insignificance of place, environment, and ties…”. Chapter 6 ends with the following visual:
Dualities of Innovation:
Individual / Collaborative
Passion / Profit
And yet, the book takes on the contrarian title Talent Wants to Be Free! We are at a moment in history in which the pendulum has swung too far. We have too much, not too little, control over information, mobility, and knowledge. We uncover this imbalance through a combination of a broad range of methodologies: historical, empirical, experimental, comparative, theoretical, and normative. These are exciting times for innovation research, and as I hope to convince the readers of Talent, insights from all disciplines are contributing to these debates.
November 16, 2013 at 12:56 pm Posted in: Antitrust, Articles and Books, Behavioral Law and Economics, Book Reviews, Bright Ideas, Economic Analysis of Law, Empirical Analysis of Law, Employment Law, Innovation, Intellectual Property, Law and Psychology, Symposium (Talent Wants to be Free), Technology
posted by Orly Lobel
As Catherine Fisk and Danielle Citron point out in their thoughtful reviews here and here, the wisdom of freeing talent must go beyond private firm-level decisions; beyond the message to corporations about the benefits of talent mobility; beyond what Frank Pasquale smartly spun as “reversing Machiavelli’s famous prescription, Lobel advises the Princes of modern business that it is better to be loved than feared.” To get to an optimal equilibrium of knowledge exchanges and mobility, smart policy is needed, and policymakers must pay attention to research. Both Fisk and Citron raise questions about the likelihood that we will see reforms anytime soon. As Fisk points out — and as her important historical work has skillfully shown, and more recently, as we witness developments in several states including Michigan, Texas, and Georgia, as well as (again as Fisk and Citron point out) in certain aspects of the pending Restatement of Employment — the movement of law and policy has actually been toward more human capital controls, not fewer. This is perhaps unsurprising to many of us. As with the copyright term extension act, which was the product of heavyweight lobbying, these shifts were supported by strong interest groups. What is perhaps different with the talent wars is the robust evidence suggesting that everyone, corporations large and small, new and old, can gain from loosening controls. Citron points to an irony that I too have been quite troubled by: the current buzz is about the intense need for talent, the talent drought, the shortage in STEM graduates. As Citron describes, the art and science of recruitment is all the rage. But while we debate reforms in schooling and reforms in immigration policies, we largely neglect to consider the reality of much deadweight loss through talent controls.
The good news is that not only in Massachusetts, where the governor has just expressed his support for reforming state law to narrow the use of non-competes, but also in other state legislatures, courts, and agencies, we see a greater willingness to think seriously about positive reforms. At the state level, the jurisdictional variation points to the double gain of regions that void, or at least strongly narrow, the use of non-competes. California, for example, gains twice: first by encouraging more human capital flow intra-regionally, and second by its willingness to give refuge to employees who have signed non-competes elsewhere. In other words, the positive effects stem not only from having the right policies of setting talent free but also from a comparative advantage vis-à-vis more controlling states. This brain gain effect has been shown empirically: areas that enforce strong post-employment controls have higher rates of departure of inventors to other regions, while states that weakly enforce non-competes are on the receiving side of the cream of the crop. One can only hope that legislatures and business leaders will take these findings seriously.
At the federal level, in a novel approach to antitrust, the federal government recently took up the investigation of anti-competitive agreements among high-tech giants not to poach one another’s employees. This in fact relates to Shubha Ghosh’s questions about defining competition and the meaning of free and open labor markets. And it is a good moment to pause over the extent to which we encourage secrecy in both private and public organizations. It is a moment in which the spiraling scandals of economic espionage by governments, coupled with leaks and demands for more transparency, require us to think hard. In this context, Citron is right to raise the question of government 2.0: for individuals to be committed and motivated to contribute to innovation, they need some assurances that their contributions will not be entirely appropriated by concentrated interests.
November 14, 2013 at 1:36 am Posted in: Antitrust, Articles and Books, Behavioral Law and Economics, Corporate Law, Economic Analysis of Law, Empirical Analysis of Law, Employment Law, Government Secrecy, Intellectual Property, Law and Psychology, Symposium (Talent Wants to be Free), Technology
posted by Steve Semeraro
Prior installments in this series addressed the background leading up to the credit card merchant fee class action and the damages provisions in the b(3) opt out class action. This post addresses the injunctive relief provisions that the settlement in In re: Payment Card Interchange Fee and Merchant Discount Antitrust Litigation styles as a mandatory b(2) non-opt out class action. An upcoming final installment in this series will address the release provisions in the settlement.
B(2) classes are appropriate where the nature of the injunctive relief is such that it will necessarily affect every class member. After setting out the relief proposed in the settlement, I’ll provide some thoughts on whether b(2) is really an appropriate device for this case. Perhaps class action experts out there could weigh in on this issue in the comments.
The injunctive relief set out by the settlement is notable for what is not provided. Nothing in the settlement addresses the core concerns in the complaint about (1) the collective setting of a default interchange fee; (2) the rule prohibiting merchants from rejecting the cards of, surcharging the card transactions of, or otherwise discriminating against some card-issuing banks, but not others; or (3) the rules making it impossible for merchants to route transactions over the least expensive network.
posted by Steve Semeraro
This post will evaluate the settlement’s damages provisions. You can find my first post providing background on the litigation here. The settlement provided that upon the court’s preliminary approval, the card networks would pay $6.05 billion, 2/3 from Visa and 1/3 from MasterCard, into a settlement fund. Depending on how many merchants chose to opt out, however, the defendants retained the right to reduce the fund through takedown payments of up to 25% of the total, and to kill the deal if opt-outs exceeded that amount. Opt-outs exceeded that amount, but the defendants have not abandoned the settlement. In addition to the flat fee award, Visa and MasterCard agreed to cut their applicable interchange fees by 10 basis points for eight months. Rather than actually reducing the fees paid by merchants, however, Visa and MasterCard would withhold 10 basis points from collected fees that would otherwise have been paid to card issuers. This amount would then be contributed into the settlement fund within 60 days from the expiration of the eight-month period. This contribution would be non-refundable, regardless of opt-outs.
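To make the fee-reduction term concrete, here is a minimal arithmetic sketch in Python. The $6.05 billion figure, the 2/3–1/3 split, and the 25% takedown cap are from the settlement as described above; the $1 trillion of eight-month interchange volume is purely a hypothetical figure for illustration, not a number from the case.

```python
# Illustrative arithmetic only: a basis point is 0.01%, so a 10-basis-point
# withholding equals 0.10% of interchange volume over the eight-month period.

cash_fund = 6.05e9                    # $6.05 billion upfront payment
visa_share = cash_fund * 2 / 3        # 2/3 paid by Visa
mastercard_share = cash_fund * 1 / 3  # 1/3 paid by MasterCard

BASIS_POINT = 0.0001                  # 1 bp = 0.01% = 0.0001
hypothetical_volume = 1.0e12          # HYPOTHETICAL eight months of volume ($1T)
fee_contribution = hypothetical_volume * 10 * BASIS_POINT

max_takedown = cash_fund * 0.25       # opt-out reduction cap: 25% of the fund

print(f"Visa: ${visa_share / 1e9:.2f}B; MasterCard: ${mastercard_share / 1e9:.2f}B")
print(f"10 bp on $1T of volume: ${fee_contribution / 1e9:.2f}B")
print(f"Maximum takedown: ${max_takedown / 1e9:.4f}B")
```

The point of the sketch is simply that the fee-reduction piece scales with actual volume during the window, while the takedown risk is a fixed fraction of the cash fund.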
posted by Steve Semeraro
I’d like to thank Concurring Opinions for inviting me to blog about In re: Payment Card Interchange Fee and Merchant Discount Antitrust Litigation. This eight-year-old multi-district litigation has produced the largest proposed cash settlement in litigation history ($7.25 billion) along with what is perhaps the most extraordinary release from liability ever concocted. It may also be the most contentious. Over half the named plaintiffs and over 25% of the class, including most large merchants (think Walmart, Target) and most merchant organizations, have objected. On September 12, Eastern District of New York Judge John Gleeson held a fairness hearing to consider the settlement, and the parties are awaiting his decision. An appeal is a virtual certainty.
This post will provide background on the credit card industry pricing mechanisms that led to this litigation, the legal issues in the case, and the structure of the settlement. (You can read more about the history of the credit card industry’s relationship to the antitrust laws here.) In subsequent posts, I’ll separately analyze the damages and relief provisions in the settlement. (If you can’t wait, my working paper analyzing the settlement is here.) If there are particular issues that you’d like to read more about, let me know in the comments and I will respond in subsequent posts.
The credit card industry is atypical, but not unique, in that it competes in a two-sided market, i.e., one that serves two distinct customer bases. A card system like Visa provides both a purchasing device (credit cards) to consumers and a payment acceptance service to merchants. (By way of comparison, the legal blogging market is also two-sided. Concurring Opinions provides both an information forum to its readers and a platform to its advertisers.)
posted by Deven Desai
Allow me to introduce my friend and colleague, Prof. Steve Semeraro. Steve’s research focuses on antitrust and criminal law. He authored the Law Professors’ Amicus Brief in the U.S. Supreme Court case Verizon v. Trinko. He currently serves as the Book Review Editor of the American Journal of Legal History and the antitrust & competition expert for the Ethics & Compliance Alliance. He is a graduate of Stanford Law School and has worked at the Department of Justice, Antitrust Division, where he led civil antitrust investigations of the optical disc and credit card industries. That brings us to why I asked him to guest blog for us.
Steve’s work on the $7.25 billion settlement of disputes between merchants and Visa and MasterCard was cited by Professor Alan Sykes, the court-appointed expert for the settlement. I asked Steve to post a bit about the settlement. He has agreed. So welcome, Steve, and we look forward to your posts.
posted by Frank Pasquale
Celebrated in the tech press only a week ago, the FTC inaction (and non-explanation of its inaction) with respect to search bias concerns is already starting to curdle. The FT ran a front page headline titled “Europe Takes Tough Stance on Google.” Another story included this striking comment from the EU’s competition chief:
Almunia insists that the Federal Trade Commission decision will be “neither an obstacle [for the European Commission] nor an advantage [for Google]. You can also think, well, this European authority, the commission, has received a gift from the American authorities, given that now every result they will get will be much better than the conclusions of the FTC,” he said with playful confidence. “Google people know very well that they need to provide results and real remedies, not arguments or comparisons with what happened on the other side [of the Atlantic].”
In response to allegations of search bias, Google has essentially said, “Trust us.” And at the end of its investigation into the potential bias, the FTC has essentially said the same. One public interest group has already put in a FOIA request for communications between Google and the FTC. Consumer Watchdog has requested a staff report that was reported to have recommended more robust action. Will Google, an advocate of openness in government and the internet generally, hold firm to its professed principles and commend those requests?
Read the rest of this post »
posted by Frank Pasquale
1) Commissioner Rosch included this intriguing footnote in his concurrence/dissent:
I . . . have concerns that insofar as Google has monopoly or near-monopoly power in the search advertising market and this power is due in whole or in part to its power over searches generally, nothing in this “settlement” prevents Google from telling “half-truths”–for example, that its gathering of information about the characteristics of a consumer is done solely for the consumer’s benefit, instead of also to maintain a monopoly or near-monopoly position. . . .That is a genuine cause for “strong concern.”
Did Google ever say that it was gathering data purely for consumers’ benefit? That would seem to be an odd representation for a for-profit company to make.
Read the rest of this post »
posted by Deven Desai
After the genericism piece, brands were on my mind, and luckily some friends knew it. My brand project was the focus of my work at Princeton’s Center for Information Technology Policy. Brett Frischmann knew that Spencer Waller was thinking about brands, as was I. Spencer and I connected, and Brands, Competition and the Law was born. We argued that brands do much more than trademark or antitrust law recognizes. Brands indicate more than source and quality; they enable non-price factors to differentiate products and drive consumption for non-functional reasons. Furthermore, as business and marketing folks know, brands allow for rent extraction. Brands allow prices to remain high even in markets where one might expect them to converge. Brands “ensconc[e] price dispersion, … instead of a competitive market that brings prices down, prices remain dispersed above marginal cost.” Michael Baye and John Morgan’s work shows this for an online market no less. We turned to antitrust and found that antitrust law simply does not account for brands well. Market definition is odd here, for a strong brand is in a way its own market. Glynn Lunney’s work on Trademark Monopolies (no SSRN version that I saw) was most helpful there. Price discrimination might signal a change in how one defines the market; brands allow such actions but are ignored. There’s more on how understanding brands would change the way antitrust might run, but I leave those interested to read the paper, or there is this offer.
The work led to a conference at University College London hosted by Ioannis Lianos where our abstract framed the day’s discussion.
Now, if you like, the follow up conference is in the U.S.
It will be in Chicago on October 19, 2012. Registration is here.
posted by Stanford Law Review
Continuing our dialogue on antitrust enforcement, the Stanford Law Review Online has just published an Essay by Daniel A. Crane entitled The Obama Justice Department’s Merger Enforcement Record. Professor Crane responds to Jonathan Baker and Carl Shapiro’s criticism of his earlier Essay:
My recent Essay, Has the Obama Justice Department Reinvigorated Antitrust Enforcement?, examined the three major areas of antitrust enforcement—cartels, mergers, and civil non-merger—and argued that, contrary to some popular impressions, the Obama Justice Department has not “reinvigorated” antitrust enforcement. Jonathan Baker and Carl Shapiro have published a response, which focuses solely on merger enforcement. Baker and Shapiro’s argument that the Obama Justice Department actually did reinvigorate merger enforcement is unconvincing.
Jon Baker and Carl Shapiro are smart, effective economists for whom I have great respect. I have few quarrels with how they or the Obama Administration in general conduct antitrust enforcement. The point of my essay was that antitrust enforcement has become largely technocratic and independent of political ideology. I have heard nothing that dissuades me from that view.
Read the full article, The Obama Justice Department’s Merger Enforcement Record by Daniel A. Crane, at the Stanford Law Review Online.
September 6, 2012 at 3:03 pm Tags: Antitrust, merger enforcement, mergers, Obama administration, Policy Posted in: Antitrust, Corporate Law, Current Events, Empirical Analysis of Law, Law Rev (Stanford), Politics
posted by Frank Pasquale
In 1968, a group of law student researchers helped Ralph Nader publish a highly critical report on the Federal Trade Commission. They concluded that the FTC failed to “detect violations systematically,” to “establish efficient priorities for its enforcement energy,” to “enforce the powers it has with energy and speed,” and to “seek sufficient statutory authority to make its work effective.” As Tim Muris notes, the report “lambast[ed] the agency and characteriz[ed] its overall performance as ‘shockingly poor.’”
The FTC has taken many important initiatives to respond to concerns identified in the report. But we must now reconsider the agency’s record, as the digital world changes kaleidoscopically and budget constraints hamstring even the best-intentioned FTC staff.
About the closest thing we’re likely to get to another “Nader Report” was Peter Maass’s exposé in Wired on the challenges facing privacy enforcement and consumer protection in the digital age. Here’s one of the many issues he identifies:
The mismatch between FTC aspirations and abilities is exemplified by its Mobile Technology Unit, created earlier this year to oversee the exploding mobile phone sector. The six-person unit consists of a paralegal, a program specialist, two attorneys, a technologist and its director, Patricia Poss. For the FTC, the unit represents an important allocation of resources to protect the privacy rights of more than 100 million smartphone owners in America. For Silicon Valley, a six-person team is barely a garage startup. Earlier this year, the unit issued a highly publicized report on mobile apps for kids; its conclusion was reflected in the subtitle, “Current Privacy Disclosures Are Disappointing.” It was a thin report, however. Rather than actually checking the personal data accessed by the report’s sampling of 400 apps, the [17 page] report just looked at whether the apps disclose, on the sites where they are sold, the types of personal data that would be accessed and what the data would be used for.
As Maass notes, “The agency can take companies to court, but its overworked lawyers don’t really have the time to go the distance against the bottomless legal staffs in Silicon Valley.” Like an SEC pushed by budget constraints to pursue mere “cost of doing business” settlements, the FTC too often has to capitulate to symbolic penalties with dubious deterrent effect.
Read the rest of this post »
posted by Stanford Law Review
The Stanford Law Review Online has just published an Essay by Jonathan Baker and Carl Shapiro entitled Evaluating Merger Enforcement During the Obama Administration. Professors Baker and Shapiro take issue with Daniel Crane’s assertions in his Essay of July 18:
We recently concluded that government merger enforcement statistics “provide clear evidence that the Obama Administration reinvigorated merger enforcement, as it set out to do.” Three weeks later, in an article published in the Stanford Law Review Online, Professor Daniel A. Crane reached the opposite conclusion, claiming that “[t]he merger statistics do not evidence ‘reinvigoration’ of merger enforcement under Obama.”
Crane is simply wrong. The data regarding merger enforcement unambiguously support our conclusion and cannot reasonably be read to support Crane’s assertions. Crane’s conclusion regarding merger enforcement is inaccurate because he relies upon flawed metrics and overlooks or misinterprets other important evidence.
Our analysis of merger enforcement at the DOJ during the George W. Bush Administration—based on the enforcement statistics and more—showed that it was unusually lax and in need of reinvigoration. It is too early to reach a comparably definitive conclusion about merger enforcement at the DOJ during the Obama Administration, but nothing in Daniel Crane’s article seriously challenges our interpretation of the preliminary data as demonstrating that the necessary reinvigoration has taken place.
Read the full article, Evaluating Merger Enforcement During the Obama Administration by Jonathan Baker and Carl Shapiro, at the Stanford Law Review Online.
August 21, 2012 at 9:30 am Tags: Antitrust, bush administration, executive branch, FTC, merger enforcement, mergers, Obama administration, Politics Posted in: Antitrust, Empirical Analysis of Law, Law Rev (Stanford), Politics
posted by Stanford Law Review
The Stanford Law Review Online has just published an Essay by Daniel Crane entitled Has the Obama Justice Department Reinvigorated Antitrust Enforcement?. Professor Crane assesses antitrust enforcement in the Obama and Bush administrations using several empirical measures:
The Justice Department’s recently filed antitrust case against Apple and several major book publishers over e-book pricing, which comes on the heels of the Justice Department’s successful challenge to the proposed merger of AT&T and T-Mobile, has contributed to the perception that the Obama Administration is reinvigorating antitrust enforcement from its recent stupor. As a candidate for President, then-Senator Obama criticized the Bush Administration as having the “weakest record of antitrust enforcement of any administration in the last half century” and vowed to step up enforcement. Early in the Obama Administration, Justice Department officials furthered this perception by withdrawing the Bush Administration’s report on monopolization offenses and suggesting that the fault for the financial crisis might lie at the feet of lax antitrust enforcement. Even before the AT&T and Apple cases, media reports frequently suggested that antitrust enforcement is significantly tougher under President Obama.
For better or worse, the Administration’s enforcement record does not bear out this impression. With only a few exceptions, current enforcement looks much like enforcement under the Bush Administration. Antitrust enforcement in the modern era is a technical and technocratic enterprise. Although there will be tweaks at the margin from administration to administration, the core of antitrust enforcement has been practiced in a relatively nonideological and nonpartisan way over the last several decades.
Two points stressed earlier should be stressed again: (1) statistical measures of antitrust enforcement are an incomplete way of understanding the overall level of enforcement; and (2) to say that the Obama Administration’s record of enforcement is not materially different than the Bush Administration’s is not to chide Obama for weak enforcement. Rather, it is to debunk the claims that antitrust enforcement is strongly dependent on politics.
This examination of the “reinvigoration” claim should not be understood as acceptance that tougher antitrust enforcement is always better. Certainly, there have been occasions when an administration would be wise to ease off the gas pedal. At present, however, there is a high degree of continuity from one administration to the next.
Read the full article, Has the Obama Justice Department Reinvigorated Antitrust Enforcement? by Daniel Crane, at the Stanford Law Review Online.
July 18, 2012 at 10:15 am Tags: Antitrust, Corporate Law, law enforcement, Obama administration Posted in: Antitrust, Empirical Analysis of Law, Law Rev (Stanford), Politics
posted by Frank Pasquale
Tim Wu’s opinion piece on speech and computers has attracted a lot of attention. Wu’s position is a useful counterpoint to Eugene Volokh’s sweeping claims about First Amendment protection for automated arrangements of information. However, neither Wu nor Volokh can cut the Gordian knot of digital freedom of expression with maxims like “search is speech” or “computers can’t have free speech rights.” Any court that respects extant doctrine, and the normative complexity of the new speech environment, will need to take nuanced positions on a case-by-case basis.
Wu states that “The argument that machines speak was first made in the context of Internet search,” pointing to cases like Langdon v. Google, Kinderstart, and SearchKing. In each case, Google successfully argued to a federal district court that it could not be liable in tort for faulty or misleading results (1) because it “spoke” the offending arrangement of information and (2) because the arrangement was Google’s “opinion,” which could not be proven factually wrong (a sine qua non for liability).
Read the rest of this post »
June 25, 2012 at 12:40 pm Posted in: Antitrust, Constitutional Law, Consumer Protection Law, First Amendment, Google & Search Engines, Privacy, Technology
posted by Barbara van Schewick
Over the past ten years, the debate over “network neutrality” has remained one of the central debates in Internet policy. Governments all over the world have been investigating whether legislative or regulatory action is needed to limit the ability of providers of Internet access services to interfere with the applications, content and services on their networks.
In addition to rules that forbid network providers from blocking applications, content and services, rules that forbid discrimination are a key component of any network neutrality regime. Non-discrimination rules apply to any form of differential treatment that falls short of blocking. Policy makers who consider adopting network neutrality rules need to decide which, if any, forms of differential treatment should be banned. These decisions determine, for example, whether a network provider is allowed to provide low-delay service only to its own streaming video application, but not to competing video applications; whether network providers can count only traffic from unaffiliated video applications, but not their own Internet video applications towards users’ monthly bandwidth cap; or whether network providers can charge different Internet access charges depending on the application used, independent of the amount of traffic created by the application.
posted by Frank Pasquale
Yesterday a vice president of the European Commission announced preliminary conclusions regarding the EU’s antitrust investigation into Google. The EC has warned Google to “change or face fines,” as Alex Barker puts it, noting “possible antitrust problems in how Google favours its own products in search results.” I cannot predict exactly how far US cases will go, or if the EC’s efforts to guide the development of the search market will succeed. (I have offered some preliminary thoughts at Danny Sokol’s excellent symposium on Google at the Antitrust & Competition Law Blog.) However, I applaud the EC for its attention to the matter.
After attending the “Regulating Search” conference in 2005, I spent some of my early academic career trying to understand whether complaints about Google had merit. I was publishing on the matter in 2006, and have continued to do so. When I started writing about this topic, some established scholars mocked my interest in it. After I published Federal Search Commission? with a co-author, one IP professor loudly scoffed that “maybe we need a federal map commission” at a conference where the restaurant location was unclear. Establishment voices who have fought for net neutrality looked with disdain or bored incomprehension at someone who dared to question a Silicon Valley darling. One scholar even threw a draft of mine on the table at a faculty talk, loudly muttered “This is not scholarship!,” and boldly predicted that Google’s dominance of search couldn’t last for more than a few years. (That was in 2008.)
I don’t know whether the EU’s actions today will lead these skeptics to a different view of my work, or to condemnations of creeping socialism. But I do think the EU has now confirmed that it was appropriate for a legal scholar to raise the types of questions I have posed over the past six years. They deserved to be part of the agenda of internet law.
This is a somewhat roundabout (and hopefully not too self-pitying) response to Frank Bowman’s earlier post on the role of outside funding in academic research (and particularly Eugene Volokh’s intervention regarding First Amendment protection for search results). Like Bowman, I worry about the effect of outside money on research. However, I think it is often the academy’s own biases and presumptions that most threaten independent thought.
posted by Peter Swire
Yesterday I gave a presentation on “The Right to Data Portability: Privacy and Antitrust Analysis” at a conference at the George Mason Law School. In an earlier post here, I asked whether the proposed EU right to data portability violates antitrust law.
I think the presentation helped sharpen the antitrust concern. The presentation first develops the intuition that consumers should want a right to data portability (RDP), which is proposed in Article 18 of the EU Data Protection Regulation. RDP seems attractive, at least initially, because it might prevent consumers from getting locked into a software platform, and because it advances the existing EU right of access to one’s own data.
Turning to antitrust law, I asked how antitrust law would consider a rule that, say, prohibits an operating system from being integrated with software for a browser. We saw those facts, of course, in the Microsoft case decided by the DC Circuit over a decade ago. Plaintiffs asserted an illegal “tying” arrangement between Windows and IE. The court rejected a per se rule against tying of software, because integration of software can have many benefits and innovation in software relies on developers finding new ways to put things together. The court instead held that the rule of reason applies.
RDP, however, amounts to a per se rule against tying of software. Suppose a social network offers a networking service and integrates it with software that controls whether, and in which formats, data can be exported. We have a tying product (the social networking service) and a tied product (the data-export module). US antitrust law has rejected a per se rule here. The proposed EU regulation essentially adopts a per se rule against that sort of tying arrangement.
Modern US and EU antitrust law seek to enhance “consumer welfare.” If the Microsoft case is correct, then a per se rule of the sort in the Regulation quite plausibly reduces consumer welfare. There may be other reasons to adopt RDP, as discussed in the slides (and I hope in my future writing). RDP might advance human rights to access. It might enhance openness more generally on the Internet. But it quite possibly reduces consumer welfare, and that deserves careful attention.
May 17, 2012 at 3:56 pm Tags: Antitrust, Privacy, right to data portability Posted in: Administrative Law, Antitrust, Cyberlaw, Economic Analysis of Law, Privacy (Consumer Privacy), Web 2.0
posted by Brett Frischmann
I am incredibly grateful to Danielle, Deven, and Frank for putting this symposium together, to Concurring Opinions for hosting, and to all of the participants for their time and engagement. It is an incredible honor to have my book discussed by such an esteemed group of experts.
Shared infrastructures shape our lives, our relationships with each other, the opportunities we enjoy, and the environment we share. Think for a moment about the basic supporting infrastructures that you rely on daily. Some obvious examples are roads, the Internet, water systems, and the electric power grid, to name just a few. In fact, there are many less obvious examples, such as our shared languages, legal institutions, ideas, and even the atmosphere. We depend heavily on shared infrastructures, yet it is difficult to appreciate how much these resources contribute to our lives because infrastructures are complex and the benefits provided are typically indirect.
The book devotes much-needed attention to understanding how society benefits from infrastructure resources and how management decisions affect a wide variety of private and public interests. It links infrastructure, a particular set of resources defined in terms of the manner in which they create value, with commons, a resource management principle by which a resource is shared within a community.
Infrastructure commons are ubiquitous and essential to our social and economic systems. Yet we take them for granted, and frankly, we are paying the price for our lack of vision and understanding. Our shared infrastructures—the lifeblood of our economy and modern society—are crumbling. We need a more systematic, long-term vision that better accounts for how infrastructure commons contribute to social welfare.
In this book, I try to provide such a vision. The first half of the book is general and not focused on any particular infrastructure resource. It cuts across different resource systems and develops a framework for understanding societal demand for infrastructure resources and the advantages and disadvantages of commons management (by which I mean managing the infrastructure resource in a manner that does not discriminate based on the identity of the user or use). The second half of the book applies the theoretical framework to different types of infrastructure—e.g., transportation, communications, environmental, and intellectual resources—and examines different institutional regimes that implement commons management. It then wades deeply into the contentious “network neutrality” debate and ends with a brief discussion of some other modern debates.
Throughout, I raise a host of ideas and arguments that probably deserve more sustained attention, but at 436 pages, I had to exercise some restraint, right? Many of the book’s ideas and arguments are bound to be controversial, and I hope some will inspire others. I look forward to your comments, criticisms, and questions.
April 24, 2012 at 3:05 pm Posted in: Administrative Law, Antitrust, Bright Ideas, Cyberlaw, Economic Analysis of Law, First Amendment, Google & Search Engines, Infrastructure Symposium, Innovation, Intellectual Property, Legal Theory, Media Law, Property Law, Technology, Uncategorized
posted by Stanford Law Review
The Stanford Law Review Online has just published an Essay by William B. Gould IV entitled The 2011 Basketball Lockout: The Union Lives to Fight Another Day—Just Barely. Gould, a former chairman of the National Labor Relations Board, provides a succinct postmortem on the 2011 lockout:
The backdrop for the 2011 negotiations was the economic weapon once regarded as a dirty word in the lexicon of American labor-management relations—the lockout. This economic weaponry, endorsed by the Supreme Court since 1965, became the flavor of the two prior decades; baseball flirted with it in 1990, basketball in 1995 and 1999. One of hockey’s lockouts even resulted in the cancellation of the entire 2004-05 season. The lockout again was utilized in 2011 by recently peaceable football as well as by basketball. The owners gravitated towards the lockout tactic because in the event of a strike (protesting changes in conditions of employment, which proved ineffective), players who crossed the union picket line could play and still sue in antitrust simultaneously. The lockout put more pressure on the players to settle. . . . The union now was represented by David Boies, who had only a few months before represented the NFL and successfully deprived that union of its only effective antitrust remedy—i.e., an injunction against the lockout, which would have required the owners to open the camps in early summer. Thus the basketball union now would not pursue the injunction remedy, notwithstanding the persuasiveness of Judge Bye’s dissenting opinion in the football case. Of course, Boies would have met himself coming around the corner if he argued for it in basketball.
Nonetheless, even though the union was stripped of its most effective antitrust remedy, litigation seems to have moved the parties together. It most certainly called the NBA’s bluff, in that the league’s regressive or inferior option was quickly forgotten. True, the NBA obtained givebacks that are estimated to be worth more than $300 million. Not only did it win on revenue sharing with the players—the players will possess between 49% and 51% as opposed to 57%—but more stringent luxury tax penalties for violators also have been instituted. As National Basketball Players Association Executive Director Billy Hunter said, the latter element constitutes the “harshest element of the new system.” At the same time, guaranteed contracts were preserved, restricted free agents will benefit from the reduction of the so-called “match period” when teams may match competing offers from seven to three days, which may encourage bidding on these players. The cap remains soft in that the so-called incumbent “Bird” players (named for Celtics superstar Larry Bird) may exceed the cap and have more expansive increases and lengths of contracts than other players. A so-called “amnesty” for bad contracts was permitted, in that even though the contracts must be paid, a player on each club may be waived and his salary not counted towards his team’s cap. What appeared to be a rout of the players in November emerged as a reasonable face-saving compromise.
Read the full article, The 2011 Basketball Lockout: The Union Lives to Fight Another Day—Just Barely by William B. Gould IV, at the Stanford Law Review Online.
Note: Updated quotation.
January 25, 2012 at 1:34 pm Tags: Antitrust, labor law, lockout, NBA, professional sports, strike, unions Posted in: Antitrust, Current Events, Law Rev (Stanford), Supreme Court