Archive for the ‘Economic Analysis of Law’ Category
posted by Peter Swire
Yesterday I gave a presentation on “The Right to Data Portability: Privacy and Antitrust Analysis” at a conference at the George Mason Law School. In an earlier post here, I asked whether the proposed EU right to data portability violates antitrust law.
I think the presentation helped sharpen the antitrust concern. The presentation first develops the intuition that consumers should want a right to data portability (RDP), which is proposed in Article 18 of the EU Data Protection Regulation. RDP seems attractive, at least initially, because it might prevent consumers from getting locked into a software platform, and because it advances the existing EU right of access to one’s own data.
Turning to antitrust law, I asked how antitrust law would consider a rule that, say, prohibits an operating system from being integrated with software for a browser. We saw those facts, of course, in the Microsoft case decided by the DC Circuit over a decade ago. Plaintiffs asserted an illegal “tying” arrangement between Windows and IE. The court rejected a per se rule against tying of software, because integration of software can have many benefits and innovation in software relies on developers finding new ways to put things together. The court instead held that the rule of reason applies.
RDP, however, amounts to a per se rule against tying of software. Suppose a social network offers a networking service and integrates it with software whose features export (or decline to export) data in various formats. We have the tying product (the social network) and the tied product (the data-export module). US antitrust law has rejected a per se rule here. The proposed EU regulation essentially adopts a per se rule against that sort of tying arrangement.
Modern US and EU antitrust law seek to enhance “consumer welfare.” If the Microsoft case is correct, then a per se rule of the sort in the Regulation quite plausibly reduces consumer welfare. There may be other reasons to adopt RDP, as discussed in the slides (and I hope in my future writing). RDP might advance human rights to access. It might enhance openness more generally on the Internet. But it quite possibly reduces consumer welfare, and that deserves careful attention.
May 17, 2012 at 3:56 pm Tags: Antitrust, Privacy, right to data portability Posted in: Administrative Law, Antitrust, Cyberlaw, Economic Analysis of Law, Privacy (Consumer Privacy), Web 2.0
posted by Dave Hoffman
As I explored in a previous post, some terrific co-authors and I have written a paper that taxonomizes federal complaints; that is, we find patterns in the kinds of causes of action that attorneys plead. In this post, I’m going to explore those patterns in some more detail.
In our data, spectral clustering revealed eight clusters of causes of action. Each grouping organizes together causes of action that are more likely to be pled together than they are to be pled with others. (This eight-cluster finding is probably not generalizable to all litigation – the paper goes into some detail about the kinds of cases that we included and excluded from our dataset.) When you think about it, that there will be some patterns from this kind of exercise is obvious — there are only a limited number of legally cognizable fact patterns that can cause injury, and attorneys often follow form books/precedent when pleading. Still, we didn’t know what the patterns would be before completing the analysis.
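For readers curious about the mechanics, here is a minimal sketch of the kind of spectral clustering the paragraph describes. This is my illustration, not the paper’s actual code: the toy pleading matrix and the co-occurrence affinity are assumptions.

```python
# Hypothetical sketch (not the paper's pipeline): spectral clustering of
# causes of action based on how often they are pled together.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# Toy data: rows are complaints, columns are causes of action (1 = pled).
complaints = rng.integers(0, 2, size=(500, 40))

# Affinity between two causes of action: number of complaints pleading both.
affinity = (complaints.T @ complaints).astype(float)
np.fill_diagonal(affinity, 0)

model = SpectralClustering(n_clusters=8, affinity="precomputed",
                           assign_labels="kmeans", random_state=0)
labels = model.fit_predict(affinity)
for k in range(8):
    print(f"cluster {k}: causes of action {np.flatnonzero(labels == k)}")
```

On real data, each cluster collects causes of action that co-occur in complaints more often than chance would suggest.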
The Figure below provides the most common two or three causes of action per cluster:
This illustrates how, for example, intellectual property claims (like trademark infringement) often travel together with consumer protection claims; civil rights claims (like § 1983 allegations) accompany state law torts; and tort claims often fit with contract and fraud claims. This should be old news to anyone who has ever practiced law. Moreover, the Figure doesn’t give us a good handle on how alike or unlike the patterns are from one another. Follow me after the jump for the Figure that tries to accomplish just that.
posted by Brett Frischmann
In the book, I stress the limits of mathematical models and quantitative data in the infrastructure context because the models and data tend to be partial and distort by omission. The following footnote in the Conclusion captures my concern:
Economists strongly prefer to work with formal mathematical models and quantitative data, for good reasons, but this preference introduces considerable limitations. Among other things, this preference leads many economists to isolate a particular market or two to analyze, holding others constant and assuming them to be complete and competitive. This approach is highly distorting in the infrastructure context because infrastructure resources are often foundations for complex systems of many interdependent markets (complete and incomplete) and nonmarket systems. Economists may cordon off various nonmarket systems and corresponding social values because such phenomena are deemed to be outside the bounds of economics. (Recall the discussion in chapter 3 about such boundaries.) But to focus on markets and their interactions and ignore nonmarkets and relevant social values distorts the analysis of infrastructure, whether or not we label the analysis “economic” because it is within the conventional bounds of the discipline. Of course, many economists are well aware of these boundaries and the corresponding limits of their expertise and policy prescriptions. Nonetheless, these limits often are not apparent or well understood by policy makers and other consumers of economic analyses, and even when the limits are understood, there are various reasons why they may be disregarded — for example, ideology or political pressures.
J. Scott Holladay, an environmental economist, explained to me:
When conducting an economic valuation of an ecosystem, we are well aware of our limitations. In a valuation study, we identify environmental services and amenities that are valuable but cannot be valued via existing economic methods, and we may assign a non-numerical value to make clear that we are not assigning a value of zero, but when the valuation study is used by policy makers, those non-numerical values may effectively be converted to a zero value and the identified environmental services and amenities truncated from the analysis. Is that a fault of the economist or the policy maker?
To be clear, I do not assign fault to anyone. Rather, my aim is to examine the consequences of reductionism and shed light on the importance of what is often ignored (or truncated).
Now that the book is in print, I have gone back to this point—expressed in this footnote and elsewhere in the book—and wondered whether this will be something that readers find frustrating or illuminating. I have also started to puzzle about what to do about the problem, whether / how to develop better models and gather more and better data, etc. Any thoughts?
posted by Brett Frischmann
It is probably worth making it clear that, as I state multiple times in the book, my argument is not “if infrastructure, then commons.” Rather, I argue that if a resource is infrastructure—defined according to functional economic criteria I set forth in the book—then there are a series of considerations one must evaluate in deciding whether or not to manage the resource as a commons. Chapter four provides a detailed analysis of what resources are infrastructure, and chapter five provides a detailed analysis of the advantages and disadvantages of commons management from the perspective of a private infrastructure owner (private strategy) and from the perspective of the public (public strategy). Chapters six, seven and eight examine significant complicating factors/costs and arguments against commons management.
After reviewing the excellent posts, it occurred to me that blog readers might come away with the mistaken impression that in the book I argue that the demand side always trumps the supply side or that classifying a resource as infrastructure automatically leads to commons management. That is certainly not the case. I do argue that the demand-side analysis of infrastructure identifies and helps us to better appreciate and understand a significant weight on one side of the scale, and frankly, a weight that is often completely ignored. Ultimately, the magnitude of the weight and relevant counterweights will vary with the infrastructure under analysis and the context.
In chapter thirteen, I argue that the case for network neutrality regulation—commons management as a public strategy applied in the context of Internet infrastructure—would remain strong even if markets were competitive. In his post, Tim disagreed with this position. In Tim’s view, competition should be enough to sustain an open Internet, for a few reasons, but mainly because consumers will appreciate (some of) the spillovers that are produced online and will be willing to pay for (and switch to) an open infrastructure, provided that competition supplies options. I replied to his post with some reasons why I disagree. In essence, I pointed out that consumers would not appreciate all of the relevant spillovers because many spillovers spill off-network and thus private demand would still fall short of social demand, and I also noted that I was less confident in his predictions about what consumers would want and how they would react. (My disagreement with Tim about the relevance of competition in the network neutrality context should not be read to mean that competition is unimportant. The point is that the demand-side market failures are not cured by competition, just as the market failures associated with environmental pollution are not cured by competition.)
In my view, the demand side case for an open, nondiscriminatory Internet infrastructure as a matter of public strategy/regulation is strong, and would remain strong even if infrastructure markets were competitive. But as I say at the end of chapter thirteen, it is not dispositive. Here is how I conclude that chapter:
My objective in this chapter has not been to make a dispositive case for network neutrality regulation. My objective has been to demonstrate how the infrastructure analysis, with its focus on demand-side issues and the function of commons management, reframes the debate, weights the scale in favor of sustaining end-to-end architecture and an open infrastructure, points toward a particular rule, and encourages a comparative analysis of various solutions to congestion and supply-side problems. I acknowledge that there are competing considerations and interests to balance, and I acknowledge that quantifying the weight on the scale is difficult, if not impossible. Nonetheless, I maintain that the weight is substantial. The social value attributable to a mixed Internet infrastructure is immense even if immeasurable. The basic capabilities the infrastructure provides, the public and social goods produced by users, and the transformations occurring on and off the meta-network are all indicative of such value.
posted by Frank Pasquale
Brett Frischmann’s book is a summa of infrastructural theory. Its tone and content approach the catechetical, patiently instructing the reader in each dimension and application of his work. It applies classic economic theory of transport networks and environmental resources to information age dilemmas. It thus takes its place among the liberal “big idea” books of today’s leading Internet scholars (including Benkler’s Wealth of Networks, van Schewick’s Internet Architecture and Innovation, Wu’s Master Switch, Zittrain’s Future of the Internet, and Lessig’s Code). So careful is its drafting, and so myriad its qualifications and nuances, that it is likely consistent with 95% of the policies (and perhaps theories) endorsed in those compelling books. And yet the US almost certainly won’t make the necessary investments in roads, basic research, and other general-purpose inputs that Frischmann promotes. Why is that?
Lawrence Lessig’s career suggests an answer. He presciently “re-marked” on Frischmann’s project in a Minnesota Law Review article. But after a decade at the cutting edge of Internet law, Lessig switched direction entirely. He committed himself to cleaning up the Augean stables of influence on Capitol Hill. He knew that even the best academic research would have no practical impact in a corrupted political sphere.
Were Lessig to succeed, I have little doubt that the political system would be more open to ideas like Frischmann’s. Consider, for instance, the moral imperative and economic good sense of public investment in an era of insufficient aggregate demand and near-record-low interest rates:
The cost of borrowing to fund infrastructure projects, [as Economic Policy Institute analyst Ethan Pollack] points out, has hit record “low levels.” And the private construction companies that do infrastructure work remain desperate for contracts. They’re asking for less to do infrastructure work. “In other words,” says Pollack, “we’re getting much more bang for our buck than we usually do.”
And if we spend those bucks on infrastructure, we would also be creating badly needed jobs that could help juice up the economy. Notes Pollack: “This isn’t win-win, this is win-win-win-win.” Yet our political system seems totally incapable of seizing this “win-win-win-win” moment. What explains this incapacity? Center for American Progress analysts David Madland and Nick Bunker see inequality as the prime culprit.
April 26, 2012 at 8:17 am Posted in: Economic Analysis of Law, Infrastructure Symposium, Innovation, Law and Inequality, Philosophy of Social Science, Political Economy, Politics, Symposium (Infrastructure), Technology
posted by Barbara A. Cherry
Because the framing of issues is so critical to how policy debates are conducted and policy outcomes are ultimately chosen, Brett’s analysis contributes to more balanced discussion within policy debates related to governance of infrastructures. Brett’s book emphasizes the functional role both of infrastructure resources to society and of commons as a resource management strategy, providing important insights for considering appropriate governance of infrastructure resources. Its analytical strength stems from development of a typology of different infrastructures “based on the types of systems dependent on the infrastructural resource and the distribution of productive activities it facilitates” (p. 61), which is then used to understand the importance of (what Brett describes as) demand-side characteristics of various types of infrastructures. This demand-side functional approach is contrasted with the supply-side approach that has tended to dominate the focus of policy debates related to governance of infrastructures.
To understand Brett’s analysis, it is critical to understand the definitions of component terms and certain economic and legal concepts upon which his analysis is based. For this reason, one has to work patiently through substantial portions of the book that lay the foundation for understanding how his typology contributes to understanding the application of commons management (a form of nondiscriminatory access rule) to infrastructures both generally and in specific contexts. This is a compliment, not a criticism, of how Brett took on the challenge of carefully constructing analytical arguments, particularly from concepts of law and economics with which readers are likely familiar but perhaps with differing shades of meaning.
However, it is also challenging to accurately incorporate the research of others who are also attempting to contribute toward a more balanced policy debate over governance of access to infrastructures. In this regard, for me, a weakness in the analysis throughout Brett’s book is a set of inaccuracies (or insufficient clarity) as to the functional role of various bodies of law that have developed to address access problems in varying contexts. For example, the discussion of common carriage (see p. 218) conflates the origins of the common law of common carriage and of public utilities. The origins of common carriage obligations are based on duties under tort law; it is public utility law, not common carriage, that developed in part from the laws of franchise and monopoly. But because some infrastructures – such as railroads, telegraphy and telephony – are both common carriers and public utilities, the distinctive functional roles of the two bodies of law have come to be conflated and misunderstood. This conflation, in turn, has tended to mislead discourse related to many deregulatory telecommunications policies, including network neutrality.
Therefore, in my view, the contribution of Brett’s work toward a more balanced policy discussion of governance of infrastructures would be further strengthened by juxtaposing his functional approach to infrastructure resources with a more carefully delineated (and accurate) functional approach to the various bodies of law that have developed thus far to address varying forms of infrastructure access problems.
posted by Adam Thierer
[My thanks to Deven Desai, Frank Pasquale, and all the folks here at Concurring Opinions for inviting me to contribute to this symposium on infrastructure policy and Brett's important new book on the topic. -- AT]
As a textbook, there’s a lot to like about Brett Frischmann’s new book, Infrastructure: The Social Value of Shared Resources. He offers a comprehensive and highly accessible survey of the key issues and concepts, and outlines much of the relevant literature in the field. The student of infrastructure policy will benefit from Frischmann’s excellent treatment of public goods and social goods; spillovers and externalities; proprietary versus commons systems management; common carriage policies and open access regulation; congestion pricing strategies; and the debate over price discrimination for infrastructural resources. Frischmann’s book deserves a spot on your shelf whether you are just beginning your investigation of these issues or have covered them your entire life.
As a polemic that hopes to persuade the reader that “society is better off sharing infrastructure openly,” however, Frischmann’s book is less convincing. It certainly isn’t because I can’t find examples of some resources that might need to be managed as a commons or a collective resource. But there’s a question of balance, and I believe Frischmann too often strikes it in favor of commons-based management based on the rationale that “citizens must learn to appreciate the social value of shared infrastructure” (p. xi), without fully appreciating the costs and complexities of making that the paramount value in this debate.
April 25, 2012 at 10:03 am Tags: Buchanan, budgets, capture, commons, demand-side, Flyvbjerg, free lunch, incentives, infrastructure, investment, privatization, public choice, spending, supply-side, taxpayers Posted in: Economic Analysis of Law, Infrastructure Symposium, Politics, Symposium (Infrastructure)
posted by Marvin Ammori
Brett Frischmann strikes me as the Sonic Youth of law professors. You might not know any Sonic Youth songs, but when you read interviews of your favorite artists, they all mention Sonic Youth as a huge influence on their own sound.
That is: Brett influences the influencers.
A few years back, Larry Lessig wrote a law review article responding to one of Brett’s early pieces on infrastructure and telecommunications. Brett followed up with pieces co-authored with leading communications scholar Barbara van Schewick and leading patent guru Mark Lemley. The others taking part in this symposium are a who’s who of heavy hitting thinkers in the field, from Georgetown’s Julie Cohen and Columbia’s Tim Wu to top Google lawyer Rick Whitt.
The uninitiated may wonder: what’s all the fuss about?
Brett does something in this book and in his work on infrastructure that is truly novel and very important. (Yes, all books and articles claim to be novel and important; Brett’s actually is.)
I think the most valuable contribution is that his economic analysis of infrastructure helps expose many of the flaws in our current thinking about law and economics. While many argue that defining property rights and internalizing externalities are essential for economic growth and effective markets, Brett demonstrates that managing some resources as commons and encouraging the “externalizing” of positive externalities (or spillovers) often leads to greater economic growth and more robust markets.
As a result, Brett’s ideas challenge conventional law and economics thinking in a profound way, but do so based on the premises and tools of economic analysis. They also do so based not on hypothetical markets but on examples of infrastructure that we use every day and can relate to. For those who have come to believe that much of traditional law and economics appears stylized, inaccurately frictionless, and out of sync with reality, Brett’s insights are eye-opening (and refreshing). They point the way forward for economists and policymakers to do better. Hopefully, his ideas influence those influencers.
posted by Deven Desai
One way to think about any work is whether it helps us understand a range of problems. Brett Frischmann’s Infrastructure does precisely that. Indeed, when I was done, I was not satisfied, despite the range of topics he addresses, such as roads and transportation, telecommunications, environmental, and intellectual infrastructures. Like a good novel, I wanted more. The framework and ideas Frischmann sets out apply to complex problems society faces today. As a good theory and framework should do, Frischmann’s work digs into several areas and cries out for future work. One example that comes to mind is education.
Education has been a crisis issue in the past several decades; Frischmann’s Infrastructure sorts through major questions that I believe society misses in this area. An assumption is that education is a public good. Yet I think we are straying from how we manage such goods. As education seems to be failing, the United States has turned to market solutions. The somewhat standard cries of government mistakes, lack of competition, etc. fit into one model of managing the issue. Yet, as Frischmann explains, education may be understood as a merit good, and as such there is a demand side problem: “systematic undervaluation of the merit good in market settings.” (p. 45) I would add that mistakes in properly valuing education as infrastructure lead to the creation of an education aristocracy, which in turn leads to problems of the so-called One Percent type many decry today (but are unwilling to address with tax reform).
Recent evidence from Finland shows that taking an infrastructural approach leads to outcomes that we would want, but currently fail to reach. As the article describes:
Decades ago, when the Finnish school system was badly in need of reform, the goal of the program that Finland instituted, resulting in so much success today, was never excellence. It was equity. Since the 1980s, the main driver of Finnish education policy has been the idea that every child should have exactly the same opportunity to learn, regardless of family background, income, or geographic location. Education has been seen first and foremost not as a way to produce star performers, but as an instrument to even out social inequality. … When Finnish policymakers decided to reform the country’s education system in the 1970s, they did so because they realized that to be competitive, Finland couldn’t rely on manufacturing or its scant natural resources and instead had to invest in a knowledge-based economy.
This approach strikes me as a good example of how thinking in terms of infrastructure, social goods, and externalities (p. 49) allows us to recognize different ways of managing complex societal issues. Furthermore, this example has data behind it. Finland is consistently at the top of PISA scores in language, math, and science. And if one wishes to say that Finland is homogeneous, different from the U.S., etc., there is a control: Norway went with the American model and did much less well. As the article points out, education is a state policy in the U.S. Some states are smaller and more homogeneous than Finland. I suggest that a truly innovative state program would look at such an investment to give the state a competitive advantage. California was arguably such a state with its plan for undergraduate education until folks, wait for it, undervalued education and stopped paying for it on a broad basis.
posted by Brett Frischmann
I am incredibly grateful to Danielle, Deven, and Frank for putting this symposium together, to Concurring Opinions for hosting, and to all of the participants for their time and engagement. It is an incredible honor to have my book discussed by such an esteemed group of experts.
Shared infrastructures shape our lives, our relationships with each other, the opportunities we enjoy, and the environment we share. Think for a moment about the basic supporting infrastructures that you rely on daily. Some obvious examples are roads, the Internet, water systems, and the electric power grid, to name just a few. In fact, there are many less obvious examples, such as our shared languages, legal institutions, ideas, and even the atmosphere. We depend heavily on shared infrastructures, yet it is difficult to appreciate how much these resources contribute to our lives because infrastructures are complex and the benefits provided are typically indirect.
The book devotes much-needed attention to understanding how society benefits from infrastructure resources and how management decisions affect a wide variety of private and public interests. It links infrastructure, a particular set of resources defined in terms of the manner in which they create value, with commons, a resource management principle by which a resource is shared within a community.
Infrastructure commons are ubiquitous and essential to our social and economic systems. Yet we take them for granted, and frankly, we are paying the price for our lack of vision and understanding. Our shared infrastructures—the lifeblood of our economy and modern society—are crumbling. We need a more systematic, long-term vision that better accounts for how infrastructure commons contribute to social welfare.
In this book, I try to provide such a vision. The first half of the book is general and not focused on any particular infrastructure resource. It cuts across different resource systems and develops a framework for understanding societal demand for infrastructure resources and the advantages and disadvantages of commons management (by which I mean managing the infrastructure resource in a manner that does not discriminate based on the identity of the user or use). The second half of the book applies the theoretical framework to different types of infrastructure—e.g., transportation, communications, environmental, and intellectual resources—and examines different institutional regimes that implement commons management. It then wades deeply into the contentious “network neutrality” debate and ends with a brief discussion of some other modern debates.
Throughout, I raise a host of ideas and arguments that probably deserve/require more sustained attention, but at 436 pages, I had to exercise some restraint, right? Many of the book’s ideas and arguments are bound to be controversial, and I hope some will inspire others. I look forward to your comments, criticisms, and questions.
April 24, 2012 at 3:05 pm Posted in: Administrative Law, Antitrust, Bright Ideas, Cyberlaw, Economic Analysis of Law, First Amendment, Google & Search Engines, Infrastructure Symposium, Innovation, Intellectual Property, Legal Theory, Media Law, Property Law, Technology, Uncategorized
posted by Frank Pasquale
This week, Concurring Opinions is hosting a symposium on Brett Frischmann’s book, Infrastructure: The Social Value of Shared Resources. Having long followed and enjoyed Brett’s work, Deven and I are honored to organize the discussion.
Viewing foundational environmental and intellectual resources through the infrastructure lens yields interesting insights regarding commons management institutions. In particular, both environmental and intellectual property legal systems construct semi-commons arrangements that create and regulate interdependent private rights and public commons.
The symposium will include contributions from Julie Cohen, Laura DeNardis, Andrew Odlyzko, Tim Wu, Marvin Ammori, Timothy B. Lee, Robin Chase, David Isenberg, David Post, Adam Thierer, Rick Whitt, and Barbara A. Cherry.
posted by Deven Desai
Julie Cohen’s Configuring the Networked Self is different and signals that the next era of tech policy is upon us. The explosion of books about the Internet tracks the explosion of, well, the Internet. Could there be a bubble here too? Are most books simply restating and rehashing arguments from years ago? Probably. Cohen’s book, however, points the way to the next questions about not just the Internet, but how we structure the next twenty to forty years of society. She asks that we look at the state of not just networked technology, but the economy, law, and society that has emerged, how we justify it, and what it should look like going forward. Recent work by Barton Beebe, Maggie Chon, Brett Frischmann, Frank Pasquale, Daniel Solove, and Madhavi Sunder makes me confident that the new era is here and work in it is growing. Rather than staying with the silos of the past fifteen years, this new inquiry looks to how the system works and probes whether society at large is reaping the benefits. Works like Code, The Future of the Internet, and The Wealth of Networks make important contributions to understanding and justifying certain visions of the Internet/Tech society. I believe, however, that the moment for those explorations is waning. Of course the debates regarding IP protection, the open Internet, etc. will continue, and there are important near-term battles there. But the most pressing question for scholarship and society at large is: what comes next?
Talk of innovation and what that means is rather staid and redundant. Leave X the way it is or all will cease. No. Stop X or a once shining industry will die (and you won’t get the things you thought you loved). Back and forth the players go. A closer look shows that they are fighting about their piece of the rapid growth pie. No one seems to look at exactly what innovation is at stake (is it a breakthrough, or tinkering with and applying a major one?), where capital is heading (is it rushing after the heady returns of early stage industries or fueling production and strong, reasonable rates of return?), and how the innovation spreads wealth across society (are the benefits starting to reshape so many industries that a second wave of returns and improvements revitalizes older industries such that the middle class grows?). No one, except Carlota Perez and her contemporaries. They investigate the Schumpeter model but go further. Perez makes the strong case that after a technology reaches a peak, there is a crash (or two), and then the real action begins. Society must look to regulation and other mechanisms so that the true golden age arrives, one where the tech wealth spreads and production capital is the order of the day. Note that while that happens, the next big tech breakthrough is likely lurking in a lab somewhere, waiting to pop out and shift the world once more.
Cohen’s book comes at the peak of the tech revolution that roughly started in 1971 with the birth of the microprocessor, and it is a vital resource for the turning point at which we find ourselves. I suggest that Cohen and the new wave of tech scholars who look to Sen and Nussbaum for a capabilities approach to tech policy and/or question a purely market-based analysis of the issues may be understood as demanding that we get our house in order. When Cohen calls out that privacy and copyright suffer from similar conceptual problems and argues for a new way to see how individuals’ capabilities can be enhanced, she offers a claim about how to turn the tech revolution from benefiting a small, centralized few to improving the lot of the many. Perez admits that each tech cycle has somewhat specific logics and solutions. Cohen’s situated user, her critique of the specific financial system and call for sustainable development, and her acknowledgment of the messy nature of culture track Perez’s insights. In each previous revolution, the turning point arrived and society constructed a way forward that accounted for the specifics of the technology as a broad matter for individuals, addressed failures in capital and labor markets, and was subject to certain cultural and political realities of the time. Configuring the Networked Self is a serious volley against remaining stuck in the recent past. In it, Cohen demands that we look to hard questions and honest insights about the system at large. She is not complacent about the future either. Instead, she makes a case for how we can and should proceed. Like all good scholarship, the book offers ideas to be tested and new questions to pursue. So read the book and let’s get to work.
These are my views. Not Google’s. In other words, attribution to my employer is foolish.
posted by Stanford Law Review
The Stanford Law Review Online has just published an Essay by Richard A. Epstein entitled Physical and Regulatory Takings: One Distinction Too Many. In light of Harmon v. Kimmel—a case challenging New York’s rent control statute on petition to the Supreme Court—Epstein provides a succinct economic takedown of uncompensated regulatory takings in four distinct areas: rent control, support easements, zoning, and landmark preservation statutes. In suggesting a unified approach to eminent domain whether the taking is physical or regulatory, he writes:
Unfortunately, modern takings law is in vast disarray because the Supreme Court deals incorrectly with divided interests under the Takings Clause of the Fifth Amendment, which reads: “nor shall private property be taken for public use, without just compensation.” The Supreme Court’s regnant distinction in this area is between physical and regulatory takings. In a physical taking, the government, or some private party authorized by the government, occupies private land in whole or in part. In the case of a per se physical taking, the government must pay the landowner full compensation for the value of the land occupied. Regulatory takings, in contrast, leave landowners in possession, but subject them to restrictions on the ability to use, develop, or dispose of the land. Under current law, regulatory takings are only compensable when the government cannot show some social justification, broadly conceived, for its imposition.
Thus, under current takings law, a physical occupation with trivial economic consequences gets full compensation. In contrast, major regulatory initiatives rarely require a penny in compensation for millions of dollars in economic losses. . . .
The judicial application of takings law to these four different partial interests in land thus destroys the social value created by private transactions that create multiple interests in land. The unprincipled line between occupation and regulation is then quickly manipulated to put rent control, mineral rights, and air rights in the wrong category, where the weak level of protection against regulatory takings encourages excessive government activity. The entire package lets complex legal rules generate the high administrative costs needed to run an indefensible and wasteful system. There are no partial measures that can fix this level of disarray. There is no intellectual warrant for making the categorical distinction between physical and regulatory takings, so that distinction should be abolished. A unified framework should be applied to both cases, where in each case the key question is whether the compensation afforded equals or exceeds the value of the property interest taken. The greatest virtue of this distinction lies not in how it resolves individual cases before the courts. Rather, it lies in blocking the adoption of multiple, mischievous initiatives that should not have been enacted into law in the first place. But in the interim, much work remains to be done. A much-needed first step down that road depends on the Supreme Court granting certiorari in Harmon v. Kimmel.
Read the full article, Physical and Regulatory Takings: One Distinction Too Many by Richard A. Epstein, at the Stanford Law Review Online.
posted by Samir Chopra
Andrea Matwyshyn’s reading of the agency analysis of contracting (offered in A Legal Theory for Autonomous Artificial Agents and also available at SSRN) is rigorous and raises some very interesting questions. I thank her for her careful and attentive reading of the analysis and will try to do my best to respond to her concerns here. The doctrinal challenges that Andrea raises are serious and substantive for the extension and viability of our doctrine. As I note below, accommodating some of her concerns is the perfect next step.
At the outset, I should state what some of our motivations were for adopting agency doctrine for artificial agents in contracting scenarios (these helped inform the economic incentivizing argument for maintaining some separation between artificial agents and their creators or their deployers):
[A]pplying agency doctrine to artificial agents would permit the legal system to distinguish clearly between the operator of the agent i.e., the person making the technical arrangements for the agent’s operations, and the user of the agent, i.e., the principal on whose behalf the agent is operating in relation to a particular transaction.
Embracing agency doctrine would also allow a clear distinction to be drawn between the authority of the agent to bind the principal and the instructions given to the agent by its operator.
Third, there was an implicit, unstated economic incentive.
February 19, 2012 at 2:10 pm Tags: A Legal Theory for Autonomous Artificial Agents, artificial agents Posted in: Contract Law & Beyond, Cyberlaw, Economic Analysis of Law, Legal Theory, Symposium (Autonomous Artificial Agents), Technology, Tort Law
posted by Dave Hoffman
Alessandro Acquisti, Sasha Romanosky, and I have a new draft up on SSRN, Empirical Analysis of Data Breach Litigation. Sasha, who’s really led the charge on this paper, has presented it at many venues, but this draft is much improved (and is the first public version). From the abstract:
In recent years, a large number of data breaches have resulted in lawsuits in which individuals seek redress for alleged harm resulting from an organization losing or compromising their personal information. Currently, however, very little is known about those lawsuits. Which types of breaches are litigated, which are not? Which lawsuits settle, or are dismissed? Using a unique database of manually-collected lawsuits from PACER, we analyze the court dockets of over 230 federal data breach lawsuits from 2000 to 2010. We use binary outcome regressions to investigate two research questions: Which data breaches are being litigated in federal court? Which data breach lawsuits are settling? Our results suggest that the odds of a firm being sued in federal court are 3.5 times greater when individuals suffer financial harm, but over 6 times lower when the firm provides free credit monitoring following the breach. We also find that defendants settle 30% more often when plaintiffs allege financial loss from a data breach, or when faced with a certified class action suit. While the compromise of financial information appears to lead to more federal litigation, it does not seem to increase a plaintiff’s chance of a settlement. Instead, compromise of medical information is more strongly correlated with settlement.
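For readers who want to see the shape of the analysis, here is a minimal sketch of a binary outcome (logit) regression of the sort the abstract describes. Everything here is illustrative: the variable names are mine, and the data are simulated to roughly match the reported odds ratios rather than drawn from the paper.

```python
# Hypothetical sketch of a binary-outcome (logit) regression; simulated data,
# not the paper's. Odds ratios are seeded to roughly match those reported.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
financial_harm = rng.integers(0, 2, n)     # individuals suffered financial harm
credit_monitoring = rng.integers(0, 2, n)  # firm offered free credit monitoring

# Log-odds of suit: +ln(3.5) for financial harm, -ln(6) for credit monitoring.
log_odds = -1.0 + np.log(3.5) * financial_harm - np.log(6.0) * credit_monitoring
sued = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

X = sm.add_constant(np.column_stack([financial_harm, credit_monitoring]))
result = sm.Logit(sued, X).fit(disp=False)
print(np.exp(result.params))  # exponentiated coefficients ~ odds ratios
```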
A few thoughts follow after the jump.
February 19, 2012 at 1:33 pm Posted in: Economic Analysis of Law, Empirical Analysis of Law, Privacy, Privacy (Consumer Privacy), Privacy (ID Theft), Privacy (Law Enforcement), Privacy (Medical)
posted by Dave Hoffman
President Obama’s State of the Union touched on a topic that’s quite relevant to the recent debates about legal education:
“Of course, it’s not enough for us to increase student aid. We can’t just keep subsidizing skyrocketing tuition; we’ll run out of money. States also need to do their part, by making higher education a higher priority in their budgets. And colleges and universities have to do their part by working to keep costs down. Recently, I spoke with a group of college presidents who’ve done just that. Some schools re-design courses to help students finish more quickly. Some use better technology. The point is, it’s possible. So let me put colleges and universities on notice: If you can’t stop tuition from going up, the funding you get from taxpayers will go down. Higher education can’t be a luxury—it’s an economic imperative that every family in America should be able to afford.”
As political pap goes, this is as good as any. But I’d go a step further to ask how the government could help keep down costs, apart from threatening to take away subsidies. Costs have many drivers, including rising student demand for particular kinds of campus amenities, legacy benefit costs that plague all large-scale employers, and rising health costs. But the biggest factor is faculty salaries. Given tenure (which affects law schools disproportionately because of our accreditor’s monopoly), it might seem like this is a wicked problem. Maybe it is, but the President could have called for the Congress to make a small change in law that might make a real difference: repeal that portion of the ADEA which prohibits mandatory retirement ages for university professors.
As is well known, the federal government prohibits mandatory retirement policies except when age is a bona fide occupational requirement or when the person is a qualifying executive. 29 U.S.C. §§ 623(f), 631(c). An exception for tenured employees, including professors, was phased out in 1993. (The law phasing out the exception passed in 1986.) As this study predicted, the impact on research universities in particular is severe, as an increasingly high percentage of workers stay on the job after age 70. Why does this matter? If teaching and/or scholarship decline after many years on the job – and there is some evidence that they do – universities have few remedies, given tenurial job protections for underperforming employees. In today’s economy, with an increasingly volatile stock market and unpalatable health insurance choices, we’d probably also expect fewer faculty to retire voluntarily than in the past. Thus, many institutions will find it hard to reduce costs by reducing faculty sizes (or by paying less per person by replacing older, more expensive employees with younger, cheaper ones). We will deliver fewer educational goods, at higher costs.
Now there are good reasons for prohibiting mandatory retirement in general. But I’ve never understood why those reasons translate when you’ve got a tenured faculty who often exercise more self-government than law firm partners. In any event, given the economic realities of the moment, lumping faculty in with other workers feels like a luxury students can no longer afford.
posted by Deven Desai
Back in October, Valve co-founder Gabe Newell explained the economics of video games as his company sees them. The Geekwire article is worth the read. For now, I’ll point out that he admits “We don’t understand what’s going on” and uses the language of co-creation of value, which I happen to believe is where things are headed, as it were, to describe what the company is doing:
This is probably the biggest change that’s affected the gaming business over the last few years. It’s not just that we have digital distribution to our customers. It’s that we have this incredible two-way connection that we’ve never had before with our customers.
We’ve gone from a situation where we dream up a game, we spend three years making it, we put it in a box, we put it out in stores, we hope it sells, to a situation that’s incredibly more fluid and dynamic, where we’re constantly modifying the game with the participation of the customers themselves.
The comments on piracy comport with insights from other industries:
One thing that we have learned is that piracy is not a pricing issue. It’s a service issue. The easiest way to stop piracy is not by putting antipiracy technology to work. It’s by giving those people a service that’s better than what they’re receiving from the pirates. For example, Russia. You say, oh, we’re going to enter Russia, people say, you’re doomed, they’ll pirate everything in Russia. Russia now outside of Germany is our largest continental European market. … the people who are telling you that Russians pirate everything are the people who wait six months to localize their product into Russia. … So that, as far as we’re concerned, is asked and answered. It doesn’t take much in terms of providing a better service to make pirates a non-issue.
The information on pricing is really cool. “[W]e varied the price of one of our products. We have Steam so we can watch user behavior in real time. That gives us a useful tool for making experiments which you can’t really do through a lot of other distribution mechanisms. What we saw was that pricing was perfectly elastic. In other words, our gross revenue would remain constant. We thought, hooray, we understand this really well. There’s no way to use price to increase or decrease the size of your business.”
Yet he goes on to describe how sales such as a 75% price reduction led to gross revenue that “increased by a factor of 40.” They tested against a product they did not own and saw similar results. Then they tested free. It turns out free to play and free work differently. His thought is that the user base matters because users value the products differently, including “what the statement that something is free to play implies about the future value of the experience that they’re going to have.”
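As a rough check on those numbers (my arithmetic, not Newell’s or Valve’s): constant gross revenue under price changes corresponds to unit price elasticity, while a 75% price cut that multiplies revenue by 40 implies a 160-fold jump in units sold, something far more elastic.

```python
# Back-of-the-envelope check (my arithmetic, not Valve's data): what does a
# 75% price cut that multiplies gross revenue by 40 imply about units sold?
price_multiplier = 0.25    # 75% price reduction
revenue_multiplier = 40.0  # reported gross revenue increase

quantity_multiplier = revenue_multiplier / price_multiplier
print(f"units sold multiplied by {quantity_multiplier:.0f}")  # 160

# Arc (midpoint) elasticity implied by that move, normalizing the old price
# and quantity to 1:
p0, p1 = 1.0, 0.25
q0, q1 = 1.0, quantity_multiplier
elasticity = ((q1 - q0) / ((q0 + q1) / 2)) / ((p1 - p0) / ((p0 + p1) / 2))
print(f"implied arc elasticity ~ {elasticity:.1f}")  # about -1.6
```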
Furthermore, conversion rates shift too. Free to play often “see[s] about a 2 to 3 percent conversion rate of the people in their audience who actually buy something, and then with Team Fortress 2, which looks more like Arkham Asylum in terms of the user profile and the content, we see about a 20 to 30 percent conversion rate of people who are playing those games who buy something.”
What do all these tests mean? As Newell said, it’s unclear. That is why I could see some rather cool studies being done for this emerging area.
posted by Deven Desai
OK, that title is a riff on a line from The Player. I loved it when the film came out and still do. It says so much of nothing, but captures a vibe that persists. Yet again it seems the film industry is in trouble, or rather in the doldrums. The Times reports that this year’s box office was a bit off from last year’s. Another favorite film industry line (and maybe true for all content industries) is “Nobody knows anything.” So, as the article notes, “Movies are a cyclical business,” and last year’s numbers may have been inflated by a hangover from the previous year’s Avatar release. Then again, prices have gone up and attendance is down, so there may be a real drop in the industry. There are better answers in the article than in other wrap-up stories I recall reading as a kid growing up in L.A. and devouring the Calendar section of the L.A. Times when it was good.
For example, as the NY Times puts it:
What has gone wrong? Plenty, say studio distribution executives, who point to competition for leisure dollars, particularly among financially pressed young people (the movie industry’s most coveted demographic); too many family movies; and the continued erosion of star power.
One more thing: “You have to go back and look at the content,” said Dan Fellman, president of domestic distribution for Warner Brothers. “Good movies always rise to the occasion. Bad ones, not so much.”
In the immortal words of Keanu Reeves, “Whoa.” Studios admitting that they compete for leisure dollars? Acknowledgement that star power is not that powerful? Furthermore, the article notes that consumers use social media and the Internet to sort rubbish copycat films from good ones. Per the Times, Phil Contrino, editor of BoxOffice.com, offers: “Because they have less disposable income and because they are more plugged in to audience reaction on Facebook and Twitter, the teenage audience is becoming picky. That’s a nightmare for studios that are used to pushing lowest-common-denominator films.” Now let’s throw in video games. Call of Duty did $400 million in its first day of sales.
In sum, the youth audience does not have huge amounts to spend, and if it is choosing between a film that seems unoriginal and a video game, the video game often wins. And despite some odd spin about films aimed at older audiences doing well, the article also explains that star vehicles aimed at older audiences failed, which seems to come back to: make a good movie and people are more likely to see it in the theater.
Will sequels and re-releases in 3D draw me to the theater? Yes (damn you, Lucas, and your 3D Star Wars ploy)! But would it help if there were really good new stories? Heck yeah!
For an odd closing, I offer that economists and academics in law could do well to study the way leisure dollars are spent, the demographics of the content industries, and the way that some digital industries thrive while others claim to flounder. Then again, maybe nobody knows anything.
posted by Dave Hoffman
The NYT isn’t entirely worthless. There’s a cute technology piece up on how irritated the reporter and his friends-on-the-street are by people who talk to their iPhone’s Siri when they could just as easily text. As the Times puts it, this is a problem of unfelt externalities:
“James E. Katz, director of the Center for Mobile Communication Studies at Rutgers, said people who use their voices to control their phones are creating an inconvenience for others — noise — rather than coping with an inconvenience for themselves — the discomfort of having to type slowly on a cramped cellphone keyboard. Mr. Katz compared the behavior with that of someone who leaves a car’s engine running while parked, creating noise and fumes for people surrounding them.”
The piece goes on to claim that eventually we’ll get used to this noise pollution. Perhaps we will! But if we don’t, there are options other than anti-nuisance regulation. After all, there are competing rights here: the right to speak so you don’t have to confront your inability to text without typos, and the right not to hear what the person next to you on the subway wants for dinner. Now, we could ban Siri-like Apps in public places. But, as all good Coasians know, there’s another option. We could decide that the Siri-ans should have the right to speak wherever they are: irritated hearers can simply pay the offending speaker not to talk into their iPhone in public. In fact, I wonder if Apple could perhaps make an App for that. Call it the “Shut Down Nearby Siris For Five Minutes Auction App.” People could list the price at which they’d agree to be paid to be silenced; irritated listeners could either pay that price or bid at a lower rate. If hearers and speakers matched, we’d achieve (in the article’s words) the socially efficient outcome: back to the “old days when people just texted in public.”
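Purely in the spirit of the joke, here is a toy sketch of the matching rule such an app would need. Every name and number is hypothetical; the point is just that a trade happens whenever some listener’s bid meets the speaker’s ask.

```python
# Toy sketch of the imagined "Shut Down Nearby Siris" auction: a speaker lists
# the price at which they'd agree to be silenced; listeners bid for quiet.
def match(ask: float, bids: list[float]) -> float | None:
    """Return the winning bid if any listener meets the speaker's ask."""
    winning = [b for b in bids if b >= ask]
    return max(winning) if winning else None

# Speaker wants $2.00 for five minutes of silence; three annoyed listeners bid.
print(match(2.00, [0.50, 2.25, 1.75]))  # 2.25 -> trade: silence ensues
print(match(2.00, [0.50, 1.00]))        # None -> the speaker keeps talking
```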
posted by Dave Hoffman
Via Andrew Sullivan comes this nice figure illustrating the effect of a car driver’s care level on pedestrians. It might have come directly from the chapter on the economics of tort law in Mitch Polinsky’s famous An Introduction to Law and Economics. The chart’s author, presumably unlike most mainstream law-and-economics scholars, argues that local governments ought to be permitted more freedom to regulate driver speed (as opposed to letting the care level vary ex post through a liability regime). I think it might be a better example of the importance of regulations requiring sidewalks.