Category: Philosophy of Social Science

Book Symposium: Driesen’s The Economic Dynamics of Law

Next week, we will be hosting a symposium on David Driesen’s book The Economic Dynamics of Law (Cambridge University Press, 2013). The symposium will be held from Mar. 31 to Apr. 3, 2014. As the press’s webpage explains,

This book offers a dynamic theory of law and economics focused on change over time, aimed at avoiding significant systemic risks (like financial crises and climate disruption), and implemented through a systematic analysis of law’s economic incentives and how people actually respond to them. This theory offers a new vision of law as fundamentally a macro-level enterprise establishing normative commitments and a framework for numerous private transactions, rather than as an analogue to a market transaction. It explains how neoclassical law and economics sparked decades of deregulation culminating in the 2008 financial collapse. It then shows how economic dynamic theory helps scholars and policymakers make wise choices about how to avoid future catastrophes while keeping open a robust set of economic opportunities, with individual chapters addressing the law and economics of financial regulation, contract, property, intellectual property, antitrust, national security, and climate disruption.

Our terrific line-up of commenters will include:

Sanja Bogojevic
Brett Frischmann
James Hackney
Michael Livermore
Martha McCluskey
Uma Outka
Arden Rowell
Jennifer Taub

Thanks to them, and to David, for being part of the symposium—we all look forward to the event. Given the topic of the 2014 Phillips Lecture, it’s clear that “avoiding future catastrophes while keeping open a robust set of economic opportunities” is a critical issue for our times.


Why Some Risk Sending Intimate Pictures to “Strangers” and What It Says About Privacy

It is, as always, an honor and a pleasure to speak with the Co-Op community. Thank you to Danielle for inviting me back and thank yous all around for inviting me onto your desks, into your laps, or into your hands.

My name is Ari and I teach at New York Law School. In fact, I am honored to have been appointed Associate Professor of Law and Director of the Institute for Information Law and Policy at NYLS this year, an appointment that begins this summer and about which I am super excited. I am also finishing my doctoral dissertation in sociology at Columbia University. My scholarship focuses on the law and policy of Internet social life, and I am particularly focused on online privacy, the injustices and inequalities in unregulated online social spaces, and the digital implications for our cultural creations.

Today, and for most of this month, I want to talk a little bit about the relationship between strangers, intimacy, and privacy.

Over the last 2 years, I have conducted quantitative surveys and qualitative interviews with almost 1,000 users of any of the several gay-oriented geolocation platforms, the most famous of which is “Grindr.” These apps are described (or derided, if you prefer) as “hook up apps,” or tools that allow gay men to meet each other for sex. That does happen. But the apps also allow members of a tightly identified and discriminated-against group to meet each other when they move to a new town and don’t know anyone, to make friends, and to fall in love. Grindr, my survey respondents report, has created more than its fair share of long-term relationships and, in equality states, marriages.

But Grindr and its cousins are, at least in part, about sex, which is why the app is one good place to study the prevalence of sharing intimate photographs and the sharers’ rationales. My sample is a random sample of a single population: gay men. Ages range from 18 to 59 (I declined to include anyone who self-reported as underage); locations span the globe. My online survey was limited to gay men who had used the app for more than one week at some point in the previous 2 years, which allowed me to focus on actual users rather than the merely curious. Approximately 68% of active users reported having sent an intimate picture of themselves to someone they were chatting with. I believe the real number is much higher. Although some of those users anonymized their initial photo, i.e., cropped out their head or something similar, nearly 89% of users who admitted sending intimate photos to a “stranger” they met online also admitted to ultimately sending an identifiable photo, as well. And, yet, not one respondent reported being victimized, to their knowledge, by recipient misuse of an intimate photograph. Indeed, only a small percentage (1.9%) reported being concerned about it or letting it enter into their decision about whether to send the photo in the first place.

I put the word “stranger” in quotes because I contend that the recipients are not really strangers as we traditionally understand the term. And this matters: You can’t share something with a stranger and expect it to remain private. Some people argue you can’t even do that with a close friend: you assume the risk of dissemination when you tell anyone anything, some say. But, at least, the risk is so much higher with strangers that it is difficult for some to imagine a viable expectation of privacy argument when you choose to share intimate information with a stranger. I disagree. Sharing something with a “stranger” need not always extinguish your expectation of privacy and your right to sue under an applicable privacy tort if the intimate information is shared further.

A sociologist would say that a “stranger” is a person who is unknown or with whom you are not acquainted. The law accepts this definition in at least some respects: sometimes we say that individuals are “strangers in the eyes of the law,” like a legally married same-sex couple when they travel from New Jersey to Mississippi. I argue that the person on the other end of a Grindr chat is not necessarily a stranger because nonverbal social cues of trustworthiness, which can be seen anywhere, are heightened by the social group affinity of an all-gay male environment.

Over the next few weeks, I will tease out the rest of this argument: that trust, and, therefore, expectations of privacy, can exist among strangers. Admittedly, I’m still working it out and I would be grateful for any and all comments in future posts.

Some Brilliant Thoughts on Social Media

The LSE has a consistently illuminating podcast series, but Nick Couldry’s recent lecture really raised the bar. He seamlessly integrates cutting-edge media theory into a comprehensive critique of social media’s role in shaping events for us. I was also happy to hear him praise the work of two American scholars I particularly admire: former Co-Op guest blogger Joseph Turow (whose The Daily You was described as one of the most influential books of the past decade in media studies), and Julie Cohen (whose Configuring the Networked Self was featured in a symposium here).

I plan on posting some excerpts if I can find a transcript, or a published version of the talk. In the meantime, some more brilliant thoughts on social media, this time from Ian Bogost:

For those of us lucky enough to be employed, we’re really hyperemployed—committed to our usual jobs and many other jobs as well. . . . Hyperemployment offers a subtly different way to characterize all the tiny effort we contribute to Facebook and Instagram and the like. It’s not just that we’ve been duped into contributing free value to technology companies (although that’s also true), but that we’ve tacitly agreed to work unpaid jobs for all these companies. . . . We do tiny bits of work for Google, for Tumblr, for Twitter, all day and every day.

Today, everyone’s a hustler. But now we’re not even just hustling for ourselves or our bosses, but for so many other, unseen bosses. For accounts payable and for marketing; for the Girl Scouts and the Youth Choir; for Facebook and for Google; for our friends via their Kickstarters and their Etsy shops; for Twitter, which just converted years of tiny, aggregated work acts into $78 of fungible value per user.

And perhaps also for the NSA. As participants in 2011’s Digital Labor conference gear up for a reprise, I’m sure we’ll be discussing these ideas.

Management Wants Precarity: A California Ideology for Employment Law

The reader of Talent Wants to Be Free effectively gets two books for the price of one. As one of the top legal scholars on the intersection of employment and intellectual property law, Prof. Lobel skillfully describes key concepts and disputes in both areas. She has distilled years of rigorous, careful legal analysis into a series of narratives, theories, and key concepts, bringing legal ideas to life and dramatizing the workplace tensions between loyalty and commitment, control and creativity, better than any work I’ve encountered over the past decade. Her enthusiasm for the subject matter animates the work throughout, making the book a joy to read. Most of the other participants in this symposium have already commented on how successful this aspect of the book is, so I won’t belabor their points.

Talent Wants to Be Free also functions as a second kind of book: a management guide. The ending of the first chapter sets up this project, proposing to advise corporate leaders on how to “meet the challenge” of keeping the best performers from leaving, and how “to react when, inevitably, some of these most talented people become competitors” (26). This is a work not only destined for law schools, but also for business schools: for captains of industry eager for new strategies to deploy in the great game of luring and keeping “talent.” Reversing Machiavelli’s famous prescription, Lobel advises the Princes of modern business that it is better to be loved than feared. They should celebrate mobile workers, and should not seek to bind their top employees with burdensome noncompete clauses. Drawing on the work of social scientists like AnnaLee Saxenian (68), Lobel argues that an ecology of innovation depends on workers’ ability to freely move to where their talents are best appreciated.

For Lobel, many restrictions on the free flow of human capital are becoming just as much of a threat to economic prosperity as excess copyright, patent, and trademark protection. Both sets of laws waste resources combating the free flow of information. A firm that trains its workers may want to require them to stay for several years, to recoup its investment (28-29). But Lobel exposes the costs of such a strategy: human capital controls “restrict careers and connections that are born between people” (32). They can also hurt the development of a local talent pool that could, in all likelihood, redound to the benefit of the would-be controlling firm. Trapped in their firms by rigid Massachusetts custom and law, Route 128’s talent tended to stagnate. California refused to enforce noncompete clauses, encouraging its knowledge workers to find the firms best able to use their skills.

I have little doubt that Lobel’s book will be assigned in B-schools from Stanford to Wharton. She tells a consistently positive, upbeat story about management techniques to reconcile the seemingly incompatible goals of personal fulfillment, profit maximization, and regional advantage. But for every normative term that animates her analysis (labor mobility, freedom of contract, innovation, creative or constructive destruction) there is a shadow term (precarity, exploitation, disruption, waste) that goes unexplored. I want to surface a few of these terms, and explore the degree to which they limit the scope or force of Lobel’s message. My worry is that managers will be receptive to the book not because they want talent to be free in the sense of “free speech,” but rather in the sense of “free beer”: interchangeable cog(nitive unit)s desperately pitching themselves on MTurk and TaskRabbit.

When “Skin in the Game” is Literal

Back in the Bush years, health policy was all about making sure patients (er, “consumers”) had “skin in the game,” and faced real costs whenever they sought care. More cautious voices worried that patients often didn’t know when to avoid unnecessary care, and when failure to visit a doctor would hurt them. Now there is renewed evidence that the cautionary voices were right:

One-third of US workers now have high-deductible health plans, and those numbers are expected to grow in 2014 as implementation of the Affordable Care Act continues. There is concern that high-deductible health plans might cause enrollees of low socioeconomic status to forgo emergency care as a result of burdensome out-of-pocket costs. . . . Our findings suggest that plan members of low socioeconomic status at small firms responded inappropriately to high-deductible plans and that initial reductions in high-severity ED visits might have increased the need for subsequent hospitalizations. Policy makers and employers should consider proactive strategies to educate high-deductible plan members about their benefit structures or identify members at higher risk of avoiding needed care. They should also consider implementing means-based deductibles.

To put this in more concrete terms: “skin in the game” for many poor families may mean choosing whether to “tough out” a peritonsillar abscess or appendicitis, knowing that the temporary pain may allow them to pay rent, but also may lead to sepsis, necrosis, peritonitis, or death. As Corey Robin has observed, there is a philosophical vision affirming the imposition of such choices, but it’s not utilitarian:

By imposing this drama of choice, the economy becomes a theater of self-disclosure, the stage upon which we discover and reveal our ultimate ends. It is not in the casual chatter of a seminar or the cloistered pews of a church that we determine our values; it is in the duress—the ordeal—of our lived lives, those moments when we are not only free to choose but forced to choose. “Freedom to order our own conduct in the sphere where material circumstances force a choice upon us,” Hayek wrote, “is the air in which alone moral sense grows and in which moral values are daily re-created.”

For some, the choice is between investing in gold or cryptocurrencies; for others, between searing pain and eviction. But the market, in the “skin in the game” vision, is at least distributing these opportunities for self-disclosure through choice to all.


Brian Tamanaha’s Straw Men (Part 2): Who’s Cherry Picking?

(Reposted from Brian Leiter’s Law School Reports)

BT Claim 2:  Using more years of data would reduce the earnings premium

BT Quote: “There is no doubt that including 1992 to 1995 in their study would measurably reduce the ‘earnings premium.’”

Response:  Using more years of historical data is as likely to increase the earnings premium as to reduce it

We have doubts about the effect of more data, even if Professor Tamanaha does not.

Without seeing data that would enable us to calculate earnings premiums, we can’t know for sure if introducing more years of comparable data would increase our estimates of the earnings premium or reduce it.

The issue is not simply the state of the legal market or entry level legal hiring—we must also consider how our control group of bachelor’s degree holders (who appear to be similar to the law degree holders but for the law degree) were doing.   To measure the value of a law degree, we must measure earnings premiums, not absolute earnings levels.
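The distinction between absolute earnings and earnings premiums can be made concrete with a toy calculation (all figures below are hypothetical, not drawn from SIPP):

```python
# All figures hypothetical: the point is that the value of the degree is the
# gap between the two groups, not law graduates' absolute earnings.
law_earnings = {2007: 95_000, 2009: 85_000}        # median JD-holder earnings
bachelors_earnings = {2007: 55_000, 2009: 43_000}  # similar workers, no JD

premium = {year: law_earnings[year] - bachelors_earnings[year]
           for year in law_earnings}

# Law graduates' absolute earnings fell in the downturn...
assert law_earnings[2009] < law_earnings[2007]
# ...yet the premium over the control group rose, because the
# control group's earnings fell even faster.
assert premium[2009] > premium[2007]
```

This is why a weak entry-level legal market in a given period does not, by itself, tell us anything about the premium in that period.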

As a commenter on Tamanaha’s blog helpfully points out:

“I think you make far too much of the exclusion of the period from 1992-1995. Entry-level employment was similar to 1995-98 (as indicated by table 2 on page 9).

But this does not necessarily mean that the earnings premium was the same or lower. One cannot form conclusions about all JD holders based solely on entry-level employment numbers. As S&M’s data suggests, the earnings premium tends to be larger during recessions and their immediate aftermath and the U.S. economy only began an economic recovery in late 1992.

Lastly, even if you are right about the earnings premium from 1992-1995, what about 1987-91 when the legal economy appeared to be quite strong (as illustrated by the same chart referenced above)? Your suggestion to look at a twenty year period excludes this time frame even though it might offset the diminution in the earnings premium that would allegedly occur if S&M considered 1992-95.”

There is nothing magical about 1992.  If good quality data were available, why not go back to the 1980s or beyond?   Stephen Diamond and others make this point.

The 1980s are generally believed to be a boom time in the legal market.  Assuming for the sake of argument that law degree earnings premiums are pro-cyclical (we are not sure if they are), inclusion of more historical data going back past 1992 is just as likely to increase our earnings premium as to reduce it.  Older data might suggest an upward trend in education earnings premiums, which could mean that our assumption of flat earnings premiums may be too conservative.  Leaving aside the data quality and continuity issues we discussed before (which led us to pick 1996 as our start year), there is no objective reason to stop in the early 1990s instead of going back further to the 1980s.

Our sample from 1996 to 2011 includes both good times and bad for law graduates and for the overall economy, and in every part of the cycle, law graduates appear to earn substantially more than similar individuals with only bachelor’s degrees.

[Chart: Cycles]
This might be as good a place as any to affirm that we certainly did not pick 1996 for any nefarious purpose.  Having worked with the SIPP before and being aware of the change in design, we chose 1996 purely because of the benefits we described here.  Once again, should Professor Tamanaha or any other group wish to use the publicly available SIPP data to extend the series farther back, we’ll be interested to see the results.


Brian Tamanaha’s Straw Men (Part 1): Why we used SIPP data from 1996 to 2011

(Reposted from Brian Leiter’s Law School Reports)

BT Claim:  We could have used more historical data without introducing continuity and other methodological problems

BT quote:  “Although SIPP was redesigned in 1996, there are surveys for 1993 and 1992, which allow continuity . . .”

Response:  Using more historical data from SIPP would likely have introduced continuity and other methodological problems

SIPP does indeed go back farther than 1996.  We chose that date because it was the beginning of an updated and revitalized SIPP that continues to this day.  SIPP was substantially redesigned in 1996 to increase sample size and improve data quality.  Combining different versions of SIPP could have introduced methodological problems.  That doesn’t mean one could not do it in the future, but it might raise as many questions as it would answer.

Had we used earlier data, it could be difficult to know to what extent changes to our earnings premiums estimates were caused by changes in the real world, and to what extent they were artifacts caused by changes to the SIPP methodology.

Because SIPP has developed and improved over time, the more recent data is more reliable than older historical data.  All else being equal, a larger sample size and more years of data are preferable.  However, data quality issues suggest focusing on more recent data.

If older data were included, it probably would have been appropriate to weight more recent and higher quality data more heavily than older and lower quality data.  We would likely also have had to make adjustments for differences that might have been caused by changes in survey methodology.  Such adjustments would inevitably have been controversial.

Because the sample size increased dramatically after 1996, including a few years of pre-1996 data would not provide as much new data or have the potential to change our estimates by nearly as much as Professor Tamanaha believes.  There are also gaps in SIPP data from the 1980s because of insufficient funding.

These issues and the 1996 changes are explained at length in the Survey of Income and Program Participation User’s Guide.

Changes to the new 1996 version of SIPP include:

  • Roughly doubling the sample size, which improves the precision of estimates and shrinks standard errors
  • Lengthening the panels from 3 years to 4 years, which reduces the severity of the regression to the median problem
  • Introducing computer-assisted interviewing, which improves data collection and reduces errors and the need to impute missing data
  • Introducing oversampling of low-income neighborhoods, which mitigates the response bias issues we previously discussed (issues most likely to affect the bottom of the distribution)
  • Instituting new income topcoding procedures with the 1996 Panel, which affect both means and various points in the distribution; because topcoding is done on a monthly or quarterly basis, it can undercount end-of-year bonuses, even for those who are not extremely high income year-round
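Why the larger sample matters can be sketched in a few lines: the standard error of a sample mean falls with the square root of the sample size (the sample sizes and standard deviation below are hypothetical, chosen only to illustrate the scaling):

```python
import math

def standard_error(sd, n):
    # Standard error of a sample mean: sd / sqrt(n).
    return sd / math.sqrt(n)

sd = 25_000                           # hypothetical earnings std. deviation
se_old = standard_error(sd, 40_000)   # hypothetical pre-1996 sample size
se_new = standard_error(sd, 80_000)   # roughly doubled after the redesign

# Doubling n shrinks the standard error by a factor of sqrt(2),
# i.e., the new estimates are about 29% more precise.
assert math.isclose(se_old / se_new, math.sqrt(2))
```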

Most government surveys topcode income data—that is, there is a maximum income that they will report.  This is done to protect the privacy of high-income individuals who could more easily be identified from ostensibly confidential survey data if their incomes were revealed.

Because law graduates tend to have higher incomes than bachelor’s degree holders, topcoding introduces downward bias to earnings premium estimates. Midstream changes to topcoding procedures can change this bias and create problems with respect to consistency and continuity.

Without going into more detail, the topcoding procedure that began in 1996 appears to be an improvement over the earlier topcoding procedure.
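A minimal sketch shows how topcoding biases premium estimates downward (the cap and all incomes here are hypothetical, not actual SIPP values):

```python
def topcode(incomes, cap):
    # Replace any income above the cap with the cap itself, as
    # confidentiality-protecting government surveys do.
    return [min(x, cap) for x in incomes]

law = [60_000, 90_000, 150_000, 300_000]      # hypothetical JD earnings
bachelors = [40_000, 50_000, 60_000, 90_000]  # hypothetical control group

true_premium = sum(law) / len(law) - sum(bachelors) / len(bachelors)

cap = 120_000
observed_premium = (sum(topcode(law, cap)) / len(law)
                    - sum(topcode(bachelors, cap)) / len(bachelors))

# The cap bites mostly on the higher-earning law group, so the
# observed premium understates the true premium.
assert observed_premium < true_premium
```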

These are only a subset of the problems extending the SIPP data back past 1996 would have introduced.  For us, the costs of backfilling data appear to outweigh the benefits.  If other parties wish to pursue that course, we’ll be interested in what they find, just as we hope others were interested in our findings.


Brian Tamanaha’s Straw Men (Overview)

(Cross posted from Brian Leiter’s Law School Reports)

Brian Tamanaha previously told Inside Higher Education that our research only looked at average earnings premiums and did not consider the low end of the distribution.  Dylan Matthews at the Washington Post reported that Professor Tamanaha’s description of our research was “false.”

In his latest post, Professor Tamanaha combines interesting critiques with some not very interesting errors and claims that are not supported by data.   Responding to his blog post is a little tricky as his ongoing edits rendered it something of a moving target.  While we’re happy with improvements, a PDF of the version to which we are responding is available here just so we all know what page we’re on.

Stephen Diamond explains why Tamanaha apparently changed his post: Ted Seto and Eric Rasmusen expressed concerns about Tamanaha’s use of ad hominem attacks.

Some of Tamanaha’s new errors are surprising, because they come after an email exchange with him in which we addressed them.  For example, Tamanaha’s description of our approach to ability sorting constitutes a gross misreading of our research.  Tamanaha also references the wrong chart for earnings premium trends and misinterprets confidence intervals.  And his description of our present value calculations is way off the mark.

Here are some quick bullet point responses, with details below in subsequent posts:

  • Forecasting and Backfilling
    • Using more historical data from SIPP would likely have introduced continuity and other methodological problems
    • Using more years of data is as likely to increase the historical earnings premium as to reduce it
    • If pre-1996 historical data finds lower earnings premiums, that may suggest a long term upward trend and could mean that our estimates of flat future earnings premiums are too conservative and the premium estimates should be higher
    • The earnings premium in the future is just as likely to be higher as it is to be lower than it was in 1996-2011
    • In the future, the earnings premium would have to be lower by 85 percent for an investment in law school to destroy economic value at the median
  • Data sufficiency
    • 16 years of data is more than is used in similar studies to establish a baseline.  This includes studies Tamanaha cited and praised in his book.
    • Our data includes both peaks and troughs in the cycle.  Across the cycle, law graduates earn substantially more than bachelor’s degree holders.
  • Tamanaha’s errors and misreading
    • We control for ability sorting and selection using extensive controls for socio-economic, academic, and demographic characteristics
    • This substantially reduces our earnings premium estimates
    • Any lingering ability sorting and selection is likely offset by response bias in SIPP, topcoding, and other problems that cut in the opposite direction
    • Tamanaha references the wrong chart for earnings premium trends and misinterprets confidence intervals
    • Tamanaha is confused about present value, opportunity cost, and discounting
    • Our in-school earnings are based on data, but, in any event, “correcting” to zero would not meaningfully change our conclusions
  • Tamanaha’s best line
    • “Let me also confirm that [Simkovic & McIntyre’s] study is far more sophisticated than my admittedly crude efforts.”
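The present-value point in the bullets above can be sketched with a toy discounting calculation (career length, discount rate, degree cost, and annual premium below are all assumptions for illustration, not figures from our study):

```python
def present_value(annual_amount, years=40, rate=0.03):
    # Discount a constant stream of annual amounts back to today.
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

cost_of_degree = 200_000   # hypothetical tuition plus foregone earnings
annual_premium = 30_000    # hypothetical median annual earnings premium

pv_premium = present_value(annual_premium)

# The degree destroys value at the median only if the discounted lifetime
# premium falls below its cost; solve for the break-even annual premium.
breakeven = cost_of_degree / present_value(1)

assert pv_premium > cost_of_degree
assert breakeven < annual_premium
```

On these particular assumptions the premium could fall well below its historical level before the investment stopped breaking even; the exact threshold depends entirely on the inputs.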

The Locust and the Bee

Fables have been in the politico-economic air of late. The FT’s Martin Wolf considered the locust part of a master metaphor for the future of the global economy. He concluded that “the financial crisis was the product of an unstable interaction between ants (excess savers), grasshoppers (excess borrowers) and locusts (the financial sector that intermediated between the two).”

Now Geoff Mulgan has entered the fray with the excellent book The Locust and the Bee: Predators and Creators in Capitalism’s Future. As Mulgan observes,

If you want to make money, you can choose between two fundamentally different strategies. One is to create genuinely new value by bringing resources together in ways that serve people’s wants and needs. The other is to seize value through predation, taking resources, money, or time from others, whether they like it or not.


Why Do We Lack the Infrastructure that We Need?

Brett Frischmann’s book is a summa of infrastructural theory. Its tone and content approach the catechetical, patiently instructing the reader in each dimension and application of his work. It applies classic economic theory of transport networks and environmental resources to information age dilemmas. It thus takes its place among the liberal “big idea” books of today’s leading Internet scholars (including Benkler’s Wealth of Networks, van Schewick’s Internet Architecture and Innovation, Wu’s Master Switch, Zittrain’s Future of the Internet, and Lessig’s Code). So careful is its drafting, and so myriad its qualifications and nuances, that it is likely consistent with 95% of the policies (and perhaps theories) endorsed in those compelling books. And yet the US almost certainly won’t make the necessary investments in roads, basic research, and other general-purpose inputs that Frischmann promotes. Why is that?

Lawrence Lessig’s career suggests an answer. He presciently “re-marked” on Frischmann’s project in a Minnesota Law Review article. But after a decade at the cutting edge of Internet law, Lessig switched direction entirely. He committed himself to cleaning up the Augean stables of influence on Capitol Hill. He knew that even the best academic research would have no practical impact in a corrupted political sphere.

Were Lessig to succeed, I have little doubt that the political system would be more open to ideas like Frischmann’s. Consider, for instance, the moral imperative and economic good sense of public investment in an era of insufficient aggregate demand and near-record-low interest rates:

The cost of borrowing to fund infrastructure projects, [as Economic Policy Institute analyst Ethan Pollack] points out, has hit record “low levels.” And the private construction companies that do infrastructure work remain desperate for contracts. They’re asking for less to do infrastructure work. “In other words,” says Pollack, “we’re getting much more bang for our buck than we usually do.”

And if we spend those bucks on infrastructure, we would also be creating badly needed jobs that could help juice up the economy. Notes Pollack: “This isn’t win-win, this is win-win-win-win.” Yet our political system seems totally incapable of seizing this “win-win-win-win” moment. What explains this incapacity? Center for American Progress analysts David Madland and Nick Bunker see inequality as the prime culprit.
