Author: Brett Frischmann


Thoughts on Driesen’s The Economic Dynamics of Law

David Driesen’s book, The Economic Dynamics of Law, offers a powerful new approach to law and policy analysis.  Like many others, Professor Driesen critiques neoclassical law and economics and the application of conventional cost-benefit analysis (CBA) to various areas of law and policy.  Unlike most others, however, Professor Driesen develops an alternative.

Professor Driesen emphasizes a host of broad framing points, the implications of which are not fully understood, generally and especially within conventional law and economics.  I take the following points to be, for the most part, uncontroversial (even if their implications are not fully understood).  Most people will agree that we live in an incredibly complex, dynamic world consisting of many interdependent, complex evolving systems; that law shapes these systems and, critically, how these systems change or evolve over time; that path dependencies make some changes irreversible and others incredibly costly to unwind; that law is necessarily normative, as are the path-setting consequences of law; and that law operates as a framework that shapes but does not fully determine what people do.

The implications of these framing points demand serious attention, however, because they are too easily misunderstood or simply assumed away to make analysis tractable.  For example, the implications of the fact that preferences are endogenous and that law and the systems structured by law shape preferences are not fully accounted for in law and economics.  It is admittedly difficult to take such complications into account, and so the more tractable move is to assume preferences are exogenous and that law’s objective is efficient satisfaction of existing preferences.  Professor Driesen explains the errors in such a move.  Tractability is a poor excuse for failing to engage with reality and the normative stakes of law’s dynamics.  The fact that law shapes preferences and beliefs means that we cannot avoid confronting questions about how law shapes who we are and who we can even contemplate being.

Professor Driesen thus places analytical emphasis on law’s role in setting paths or choosing directions for society rather than determining outcomes or optimizing resource allocations.  He advances two broad normative commitments — avoiding systemic risk and providing opportunities for economic development.  He defines each and develops means for analyzing them that go beyond conventional CBA.  As others have commented on the relationship between his approach and CBA, I’ll leave that aside.  With regard to systemic risk, I had two questions for Professor Driesen:  First, how would he deal with intergenerational issues?  He touches on CBA’s use of discount rates in the climate change context and how “CBA’s results depend on the policy views of the economist conducting the analysis,” but I didn’t fully understand what alternative he offered.  Second, what about systemic benefits?  Simply put, I wondered whether there is a symmetrical point to be made about systemic benefits.  I discuss related issues in my book, Infrastructure:  The Social Value of Shared Resources (Co-Op symposium), and connect the commitment to the idea of a social option, but it also ties into North’s adaptive efficiency argument, which Professor Driesen discusses.  Systemic benefits may be a broader way to think about his second normative commitment concerning opportunities for economic development, but it is hard to say because that commitment gets much less attention in the book.  Perhaps opportunities for economic development should be extended to include human development, and Driesen’s approach could incorporate some of the ideas and lessons from Sen’s Capabilities Approach.  Certainly, many of the framing points noted above are also central to the CA project.

I was a little disappointed that the second normative commitment received less attention.  Much of the law is focused on opportunities for (human and) economic development.  Many of the applied chapters (e.g., contract, property, IP) seem to focus on it, but those chapters seemed mostly descriptive and backward-looking, with Professor Driesen saying something like, “Hey, wait a minute!  What’s really happening in these areas is dynamic change over time, with bounded rationality, …, it’s not classic law and econ!”  I would like to see more analysis of how Professor Driesen’s approach could better reconcile these areas of law with the second normative commitment he identified.

On IP, let me just say that I agree with Professor Driesen – IP scholars certainly think a lot about dynamic change.  He is right that we need to pay much more attention to path setting and how IP laws, for better or worse, shape the paths available and the paths taken.  This was a theme I explored in Intellectual Infrastructure, chapter 12 of my book.  In fact, many IP scholars are now working on this subject.

Let me end with a brief cautionary note on Professor Driesen’s appeal to macroeconomics.  I agree with him that legal scholars who employ economics tend to rely heavily on microeconomics and ignore macroeconomics.  He is also correct, in my view, when he suggests that overreliance on microeconomics, or at least certain aspects of it, has often sustained unrealistic assumptions, ideological commitments (sometimes hidden beneath the veneer of objectivity), and bad results.  I would only caution Professor Driesen that the same might be said of macroeconomics.


The benefits of being free

I applaud Orly for this excellent contribution.  There is much to praise and much to comment on.  I was particularly attracted to the interdisciplinary perspective of the book and its heavy reliance on and reporting of studies in economics, psychology, and other literatures—including but not limited to Orly’s original research.  The book provides an excellent discussion of various dimensions of innovation studies.  It also provides compelling descriptions of many different real world contexts where the lessons from the academic studies play out on the ground.  The combination is quite amazing.  I also think it is quite important that she focused attention on people, and the human, social and intellectual capital that actually drives innovation across sectors.  Too often, innovation studies lose sight of the actual people involved.  Orly’s book covers so much ground and connects with various topics I’m also interested in.  It is difficult for me to pick a particular topic or theme to comment on in this blog post.  (I’m tempted, for example, to push her to say more about technology transfer offices at universities and how they’ve evolved over time in terms of their approach to control.  I also would like to hear *much* more about the application of commons governance ideas.)  Instead, I’ll say something about the broad ambition of the book.

Orly presents the book as new wisdom – a “dynamic model” – to challenge conventional wisdom – the “orthodox model” – about the necessity of strict control over talented employees, ideas, and various other complementary resources that drive creative and innovative progress and economic growth.  I think the book does a wonderful job of pointing out the many ways in which theoretical and empirical work across many fields of inquiry combine to challenge if not completely undermine the conventional wisdom.  Controls on the flow of ideas and employees often backfire and are costly to the firm and the public.  Orly describes very well the substantial benefits – benefits all too often ignored or assumed away – in sustaining the freedom to operate, to move, to experiment, to tinker, and so on.  She effectively makes the case for a much more nuanced approach to thinking about innovation and the various ways in which freedom (to operate, to move, to think, to experiment, to ride, etc.) impacts innovation and social welfare more generally.

That said, I don’t think the book supplies a fully formed alternative vision, theory or model about what degree of control/freedom may be needed to sustain innovation.  The Goldilocks nature of the problem, which Orly describes, surfaces throughout, and it is hard to know where or how to strike the right balances as a matter of public policy (law) and private strategy (corporate practice).  The book at times seems to suggest that it will offer a solution or that the solution might be absolute freedom / no control.  But that is not really what the book ends up saying, as I understand it.  In the end, we remain stuck with the problem of nuance and variety and context- or industry-specific balancing.  Frankly, I don’t think this is a bad result at all; it’s probably where we need to be if we’re basing our judgments in reality.

For some reason, I was surprised when the book ended.  I wanted more.  I expected more.  In a sense, this is a good thing because the book provoked me to think about and look for more.  But I wonder whether the final part of the book could have tied the themes together a bit more tightly and at least proposed a research agenda for developing a more nuanced approach to innovation.  Many of the pieces of the puzzle are in Orly’s book.  But the puzzle remains incomplete.


The path toward an alternative consequentialist framework for IP and related fields of law that affect social and cultural life

Madhavi Sunder’s From Goods to a Good Life is an important book.  It contributes in significant ways to understanding our complex relationships with each other and the world we live in, share, and construct.  I especially love her use of vivid, real examples throughout the book.  To do this well involves a special skill and a willingness to dig into facts and contexts that are too easy to ignore for the sake of convenience or because the facts do not fit neatly into pre-existing models. 

As others have noted, the book fits nicely in a rich stream of scholarship that challenges narrow economic frameworks and a host of unwarranted assumptions about culture – including the distorting but common assumption of a fixed culture in the “background.” Although previous commentators have mostly mentioned scholars in the stream who have developed external critiques that challenge narrow economic frameworks from outside economics, there are many internal critiques as well.  I mention this in part because my recent book falls into this category, and as I read From Goods to a Good Life I kept seeing interesting convergences in our ideas.  But more importantly, I mention the internal critiques because in following Amartya Sen, Sunder does not reject economics or an economic framework; rather, she expressly seeks to develop and defend a broader economic framework that better reflects reality and accommodates a broader range of values. 

The book does not offer a framework that is set to compete with the economic framework she seeks to displace.  It is not quite there yet.  Drawing on Martha Nussbaum, Amartya Sen and others, Sunder articulates some features of such a framework.  Critically, she justifies deeper interrogation and rigorous development of an alternative consequentialist framework for IP and related fields of law and policy that affect social and cultural life.  These are important contributions.  Much like Julie Cohen and others in the stream (including me), Sunder has turned to the Capabilities Approach (CA) as a source of normative guidance and the roots of an analytical framework.  But much work remains to be done.  The CA helps us to conceive, recognize, and analyze normative values that are at stake in IP and related fields of law and policy that affect social and cultural life.  To develop an alternative consequentialist framework, however, we need to see more on how the CA helps us to evaluate various commitments, how to prioritize them, how adjusting the “ends” impacts our understanding and design of the “means,” how to implement and operationalize the CA in the cultural environment, and even how the CA intersects and interacts with the welfare economic framework in these fields.  Sunder examines many of these issues in specific contexts (from fair use to essential medicines), and this is important.  But I suspect that going down the CA path could lead to profound and systematic changes to IP and other related legal regimes.

Madhavi Sunder’s From Goods to a Good Life has encouraged me to explore this path in future work, and I hope others will do so too.  I am grateful to Sunder for paving the way and illustrating the urgency.


Recognizing the Limits of Models and Empirics

In the book, I stress the limits of mathematical models and quantitative data in the infrastructure context because the models and data tend to be partial and distort by omission. The following footnote in the Conclusion captures my concern:

Economists strongly prefer to work with formal mathematical models and quantitative data, for good reasons, but this preference introduces considerable limitations. Among other things, this preference leads many economists to isolate a particular market or two to analyze, holding others constant and assuming them to be complete and competitive. This approach is highly distorting in the infrastructure context because infrastructure resources are often foundations for complex systems of many interdependent markets (complete and incomplete) and nonmarket systems. Economists may cordon off various nonmarket systems and corresponding social values because such phenomena are deemed to be outside the bounds of economics. (Recall the discussion in chapter 3 about such boundaries.) But to focus on markets and their interactions and ignore nonmarkets and relevant social values distorts the analysis of infrastructure, whether or not we label the analysis “economic” because it is within the conventional bounds of the discipline. Of course, many economists are well aware of these boundaries and the corresponding limits of their expertise and policy prescriptions. Nonetheless, these limits often are not apparent or well understood by policy makers and other consumers of economic analyses, and even when the limits are understood, there are various reasons why they may be disregarded — for example, ideology or political pressures.

J. Scott Holladay, an environmental economist, explained to me:

When conducting an economic valuation of an ecosystem, we are well aware of our limitations. In a valuation study, we identify environmental services and amenities that are valuable but cannot be valued via existing economic methods, and we may assign a non-numerical value to make clear that we are not assigning a value of zero, but when the valuation study is used by policy makers, those non-numerical values may effectively be converted to a zero value and the identified environmental services and amenities truncated from the analysis. Is that a fault of the economist or the policy maker?

To be clear, I do not assign fault to anyone. Rather, my aim is to examine the consequences of reductionism and shed light on the importance of what is often ignored (or truncated).

Now that the book is in print, I have gone back to this point—expressed in this footnote and elsewhere in the book—and wondered whether this will be something that readers find frustrating or illuminating. I have also started to puzzle about what to do about the problem, whether / how to develop better models and gather more and better data, etc. Any thoughts?


If Infrastructure, then Commons: an analytical framework, not a rule

It is probably worth making it clear that, as I state multiple times in the book, my argument is not “if infrastructure, then commons.” Rather, I argue that if a resource is infrastructure, defined according to the functional economic criteria I set forth in the book, then there is a series of considerations one must evaluate in deciding whether or not to manage the resource as a commons. Chapter four provides a detailed analysis of what resources are infrastructure, and chapter five provides a detailed analysis of the advantages and disadvantages of commons management from the perspective of a private infrastructure owner (private strategy) and from the perspective of the public (public strategy). Chapters six, seven and eight examine significant complicating factors/costs and arguments against commons management.

After reviewing the excellent posts, it occurred to me that blog readers might come away with the mistaken impression that in the book I argue that the demand side always trumps the supply side or that classifying a resource as infrastructure automatically leads to commons management. That is certainly not the case. I do argue that the demand-side analysis of infrastructure identifies and helps us to better appreciate and understand a significant weight on one side of the scale, and frankly, a weight that is often completely ignored.  Ultimately, the magnitude of the weight and relevant counterweights will vary with the infrastructure under analysis and the context.

In chapter thirteen, I argue that the case for network neutrality regulation—commons management as a public strategy applied in the context of Internet infrastructure—would remain strong even if markets were competitive. In his post, Tim disagreed with this position.  In Tim’s view, competition should be enough to sustain an open Internet, for a few reasons, but mainly because consumers will appreciate (some of) the spillovers that are produced online and will be willing to pay for (and switch to) an open infrastructure, provided that competition supplies options. I replied to his post with some reasons why I disagree. In essence, I pointed out that consumers would not appreciate all of the relevant spillovers because many spillovers spill off-network and thus private demand would still fall short of social demand, and I also noted that I was less confident about his predictions about what consumers would want and how they would react. (My disagreement with Tim about the relevance of competition in the network neutrality context should not be read to mean that competition is unimportant. The point is that the demand-side market failures are not cured by competition, just as the market failures associated with environmental pollution are not cured by competition.)

In my view, the demand side case for an open, nondiscriminatory Internet infrastructure as a matter of public strategy/regulation is strong, and would remain strong even if infrastructure markets were competitive. But as I say at the end of chapter thirteen, it is not dispositive. Here is how I conclude that chapter:

 My objective in this chapter has not been to make a dispositive case for network neutrality regulation. My objective has been to demonstrate how the infrastructure analysis, with its focus on demand-side issues and the function of commons management, reframes the debate, weights the scale in favor of sustaining end-to-end architecture and an open infrastructure, points toward a particular rule, and encourages a comparative analysis of various solutions to congestion and supply-side problems. I acknowledge that there are competing considerations and interests to balance, and I acknowledge that quantifying the weight on the scale is difficult, if not impossible. Nonetheless, I maintain that the weight is substantial. The social value attributable to a mixed Internet infrastructure is immense even if immeasurable. The basic capabilities the infrastructure provides, the public and social goods produced by users, and the transformations occurring on and off the meta-network are all indicative of such value.



Introduction: Symposium on Infrastructure: the Social Value of Shared Resources

I am incredibly grateful to Danielle, Deven, and Frank for putting this symposium together, to Concurring Opinions for hosting, and to all of the participants for their time and engagement. It is an incredible honor to have my book discussed by such an esteemed group of experts. 

The book is described here (OUP site) and here (Amazon). The Introduction and Table of Contents are available here.

Abstract:

Shared infrastructures shape our lives, our relationships with each other, the opportunities we enjoy, and the environment we share. Think for a moment about the basic supporting infrastructures that you rely on daily. Some obvious examples are roads, the Internet, water systems, and the electric power grid, to name just a few. In fact, there are many less obvious examples, such as our shared languages, legal institutions, ideas, and even the atmosphere. We depend heavily on shared infrastructures, yet it is difficult to appreciate how much these resources contribute to our lives because infrastructures are complex and the benefits provided are typically indirect.

The book devotes much-needed attention to understanding how society benefits from infrastructure resources and how management decisions affect a wide variety of private and public interests. It links infrastructure, a particular set of resources defined in terms of the manner in which they create value, with commons, a resource management principle by which a resource is shared within a community.

Infrastructure commons are ubiquitous and essential to our social and economic systems. Yet we take them for granted, and frankly, we are paying the price for our lack of vision and understanding. Our shared infrastructures—the lifeblood of our economy and modern society—are crumbling. We need a more systematic, long-term vision that better accounts for how infrastructure commons contribute to social welfare.

In this book, I try to provide such a vision. The first half of the book is general and not focused on any particular infrastructure resource. It cuts across different resource systems and develops a framework for understanding societal demand for infrastructure resources and the advantages and disadvantages of commons management (by which I mean, managing the infrastructure resource in a manner that does not discriminate based on the identity of the user or use). The second half of the book applies the theoretical framework to different types of infrastructure—e.g., transportation, communications, environmental, and intellectual resources—and examines different institutional regimes that implement commons management. It then wades deeply into the contentious “network neutrality” debate and ends with a brief discussion of some other modern debates.

Throughout, I raise a host of ideas and arguments that probably deserve/require more sustained attention, but at 436 pages, I had to exercise some restraint, right? Many of the book’s ideas and arguments are bound to be controversial, and I hope some will inspire others. I look forward to your comments, criticisms, and questions.


Some thoughts on Cohen’s Configuring the Networked Self: Law, Code, and the Play of Everyday Practice

Julie Cohen’s book is fantastic.  Unfortunately, I am late to join the symposium, but it has been a pleasure playing catch up with the previous posts.  Reading over the exchanges thus far has been a treat and a learning experience.  Like Ian Kerr, I felt myself reflecting on my own commitments and scholarship.  This is really one of the great virtues of the book.  To prepare to write something for the blog symposium, I reread portions of the book a second time; maybe a third time, since I have read many of the law review articles upon which the book is based.  And frankly, each time I read Julie’s scholarship I am forced to think deeply about my own methodology, commitments, theoretical orientation, and myopias. Julie’s critical analysis of legal and policy scholarship, debate, and rhetoric is unyielding as it cuts to the core commitments and often unstated assumptions that I (we) take for granted.

I share many of the same concerns as Julie about information law and policy (and I reach similar prescriptions too), and yet I approach them from a very different perspective, one that is heavily influenced by economics.  Reading her book challenged me to confront my own perspective critically.  Do I share the commitments and methodological infirmities of the neoliberal economists she lambasts?  Upon reflection, I don’t think so.  The reason is that not all of economics boils down to reductionist models that aim to tally up quantifiable costs and benefits. I agree wholeheartedly with Julie that economic models of copyright (or creativity, innovation, or privacy) that purport to accurately sum up relevant benefits and costs and fully capture the complexity of cultural practices are inevitably, fundamentally flawed and that uncritical reliance on such models to formulate policy is distorting and biased toward seamless micromanagement and control. As she argues in her book, reliance on such models “focuses on what is known (or assumed) about benefits and costs, … [and] tends to crowd out the unknown and unpredictable, with the result that play remains a peripheral consideration, when it should be central.”  Interestingly, I make nearly the same argument in my book, although my argument is grounded in economic theory and my focus is on user activities that generate public and social goods.  I need to think more about the connections between her concept of play and the user activities I examine.  But a key shared concept is that indeterminacy in the environment and the structure of rights and affordances sustains user capabilities, and this is (might be) normatively attractive whether or not users choose to exercise the capabilities.  That is, there is social (option) value in sustaining flexibility and uncertainty.

Like Julie, I have been drawn to the Capabilities Approach (CA). It provides a normatively appealing framework for thinking about what matters in information policy—that is, for articulating ends.  But it seems to pay insufficient attention to the means.  I have done some limited work on the CA and information policy and hope to do more in the future.  Julie has provided an incredible roadmap.  In chapter 9, The Structural Conditions of Human Flourishing, she goes beyond identifying capabilities to prioritize and examines the means for enabling them.  In my view, this is a major contribution.  Specifically, she discusses three structural conditions for human flourishing: (1) access to knowledge, (2) operational transparency, and (3) semantic discontinuity.  I don’t have much to say about the access to knowledge and operational transparency discussions, other than “yep.”  The semantic discontinuity discussion left me wanting more: more explanation of the concept and more explanation of how to operationalize it.  I wanted more because I think it is spot on.  Paul and others have already discussed this, so I will not repeat what they’ve said.  But, riffing off of Paul’s post, I wonder whether it is a mistake to conceptualize semantic discontinuity as “gaps” and ask privacy, copyright, and other laws to widen the gaps.  I wonder whether the “space” of semantic discontinuities is better conceptualized as the default or background environment rather than the exceptional “gap.”  Maybe this depends on the context or legal structure, but I think the relevant semantic discontinuities where play flourishes, our everyday social and cultural experiences, are and should be the norm.  (Is the public domain merely a gap in copyright law?  Or is copyright law a gap in the public domain?)  Baselines matter.  If the gap metaphor is still appealing, perhaps it would be better to describe them as gulfs.


One more principle: Nondiscrimination

There is one principle that I would add to the five that Marvin examines in the article:  nondiscrimination.  It seems to me that across public and private, physical and virtual “space” contexts (and judicial opinions), one persistent principle is that nondiscriminatory approaches to sustaining spaces, platforms, … infrastructures are presumptively legit and normatively attractive — whether government efforts to “sustain” involve public provisioning, subsidization or regulation.

I recognize that this might seem to tread too close to the negative liberty / anti-censorship model, but in my view, it helps connect the anti-censorship model with the pro-architecture model.  We should worry when government micro-manages speech and chooses winners and losers, but macro-managing/structuring the speech environment is unavoidable.  A nondiscrimination principle guides the latter (macro-management) to avoid the former (micro-management).

This sixth principle is implicit in the other five that Marvin discusses.  It’s not articulated as a stand-alone principle, uniform across situations, or even defined completely.  Nonetheless, nondiscrimination of *some* sort is part of the spatial analysis for each principle. For example, in the paper, when Marvin discusses designated public spaces, he says that government can designate spaces–so long as it does so in a nondiscriminatory way. The nondiscrimination principle here is limited: government cannot discriminate based on the limited notion of “content.”  Another example is limited public forums, where government cannot discriminate on viewpoint but can set aside a forum for particular speakers based on the expected content (say, students / educational content).  There are other examples that Marvin explores in the paper.  In my view, there is something fundamental about nondiscrimination and the functional role that it plays that warrants further attention.

Frankly, the idea of a nondiscrimination principle connects with my own ideas about the First Amendment being aimed at sustaining infrastructure commons and the many different types of spillovers from speech–or more broadly, sustaining a spillover-rich cultural environment; I explored those ideas in an essay and I expand on them in the book.  It is important to make clear that government support for infrastructure commons — whether by direct provisioning or by common carrier style regulation — lessens pressure on both governments and markets to pick winners and losers in the speech marketplace/environment, and as Marvin argues, that is something that is and ought to be fundamental or core in any First Amendment model.


Thoughts on Ammori’s Free Speech Architecture and the Golan decision

Thank you to Marvin for an excellent article to read and discuss, and thank you Concurring Opinions for providing a public forum for our discussion.

In the article, the critical approach that Marvin takes to challenge the “standard” model of the First Amendment is really interesting. He claims that the standard model of the First Amendment focuses on preserving speakers’ freedom by restricting government action and leaves any affirmative obligations for government to sustain open public spaces to a patchwork of exceptions lacking any coherent theory or principles. A significant consequence of this model is that open public spaces for speech—I want to substitute “infrastructure” for “spaces”—are marginalized and taken for granted. My forthcoming book—Infrastructure: The Social Value of Shared Resources—explains why such marginalization occurs in this and various other contexts and develops a theory to support the exceptions. But I’ll leave those thoughts aside for now and perhaps explore them in another post. And I’ll leave it to the First Amendment scholars to debate Marvin’s claim about what is the standard model for the First Amendment.

Instead, I would like to point out how a similar (maybe the same) problem can be seen in the Supreme Court’s most recent copyright opinion. In Golan v. Holder, Justice Ginsburg marginalizes the public domain in startling fashion. Since it is a copyright case, the “model” is flipped around: government is empowered to grant exclusive rights (and restrict some speakers’ freedom), and any restrictions on the government’s power to do so are limited to narrow exceptions, i.e., the idea-expression distinction and fair use. A central argument in the case was that the public domain itself is another restriction. The public domain is not expressly mentioned in the IP Clause of the Constitution, but arguably, it is implicit throughout (the Progress of Science and useful Arts, limited Times). Besides, the public domain is inescapably part of the reality that we stand on the shoulders of generations of giants. Most copyright scholars believed that Congress could not grant copyright to works in the public domain (and probably thought that the issue raised in the case – involving restoration for foreign works that had not been granted copyright protection in the U.S. – presented an exceptional situation that might be dealt with as such). But the Court declined to rule narrowly and firmly rejected the argument that “the Constitution renders the public domain largely untouchable by Congress.” In the end, Congress appears to have incredibly broad latitude to exercise its power, limited only by the need to preserve the “traditional contours.”

Of course, it is much more troublesome that the Supreme Court (rather than scholars interpreting Supreme Court cases) has adopted a flawed conceptual model that marginalizes basic public infrastructure. We’re stuck with it.