Category: Philosophy of Social Science

Social Science in an Era of Corporate Big Data

In my last post, I explored the characteristics of Facebook’s model (i.e., exemplary) users. Today, I want to discuss the model users in the company–i.e., the data scientists who try to build stylized versions of reality (models) based on certain data points and theories. The Facebook emotion experiment is part of a much larger reshaping of social science. To what extent will academics study data-driven firms like Facebook, and to what extent will they try to join forces with its own researchers to study others?

Present incentives are clear: collaborate with (rather than develop a critical theory of) big data firms.  As Zeynep Tufekci puts it, “the most valuable datasets have become corporate and proprietary [and] top journals love publishing from them.”  “Big data” has an aura of scientific validity simply because of the velocity, volume, and variety of the phenomena it encompasses. Psychologists certainly must have learned *something* from looking at over 600,000 accounts’ activity, right?

The problem, though, is that the corporate “science” of manipulation is a far cry from academic science’s ethics of openness and reproducibility.* That’s already led to some embarrassments in the crossover from corporate to academic modeling (such as Google’s flu trends failures). Researchers within Facebook worried about multiple experiments being performed at once on individual users, which might compromise the results of any one study. Standardized review could have prevented that. But, true to the Silicon Valley ethic of “move fast and break things,” speed was paramount: “There’s no review process. Anyone…could run a test…trying to alter peoples’ behavior,” said one former Facebook data scientist.

Grant Getters and Committee Men

Why are journals so interested in this form of research? Why are academics jumping on board? Fortunately, social science has matured to the point that we now have a robust, insightful literature about the nature of social science itself. I know, this probably sounds awfully meta–exactly the type of navel-gazing Senator Coburn would excommunicate from the church of science. But it actually provides a much-needed historical perspective on how power and money shape knowledge. Consider, for instance, the opening of Joel Isaac’s article Tangled Loops, on Cold War social science:

During the first two decades of the Cold War, a new kind of academic figure became prominent in American public life: the credentialed social scientist or expert in the sciences of administration who was also, to use the parlance of the time, a “man of affairs.” Some were academic high-fliers conscripted into government roles in which their intellectual and organizational talents could be exploited. McGeorge Bundy, Walt Rostow, and Robert McNamara are the archetypes of such persons. An overlapping group of scholars became policymakers and political advisers on issues ranging from social welfare provision to nation-building in emerging postcolonial states.

Postwar leaders of the social and administrative sciences such as Talcott Parsons and Herbert Simon were skilled scientific brokers of just this sort: good “committee men,” grant-getters, proponents of interdisciplinary inquiry, and institution-builders. This hard-nosed, suit-wearing, business-like persona was connected to new, technologically refined forms of social science. . . . Antediluvian “social science” was eschewed in favour of mathematical, behavioural, and systems-based approaches to “human relations” such as operations research, behavioral science, game theory, systems theory, and cognitive science.

One of Isaac’s major contributions in that piece is to interpret the social science coming out of the academy (and entities like RAND) as a cultural practice: “Insofar as theories involve certain forms of practice, they are caught up in worldly, quotidian matters: performances, comportments, training regimes, and so on.” Government leveraged funding to mobilize research to specific ends. To maintain university patronage systems and research centers, leaders had to be on good terms with the grantors. The common goal of strengthening the US economy (and defeating the communist threat) cemented an ideological alliance.

Government still exerts influence in American social and behavioral sciences. But private industry controls critical data sets for the most glamorous, data-driven research. In the Cold War era, “grant getting” may have been the key to economic security, and to securing one’s voice in the university. Today, “exit” options are more important than voice, and what better place to exit to than an internet platform? Thus academic/corporate “flexians” shuttle between the two worlds. Their research cannot be too venal, lest the academy disdain it. But neither can it indulge in, say, critical theory (what would nonprofit social networks look like), just as Cold War social scientists were ill-advised to, say, develop Myrdal’s or Leontief’s theories. There was a lot more money available for the Friedmanite direction economics would, eventually, take.

Intensifying academic precarity also makes the blandishments of corporate data science an “offer one can’t refuse.” Tenured jobs are growing scarcer. As MOOCmongers aspire to deskill and commoditize the academy, industry’s benefits and flexibility grow ever more alluring. Academic IRBs can impose a heavy bureaucratic burden; the corporate world is far more flexible. (Consider all the defenses of the Facebook experiment authored last week which emphasized how little review corporate research has to go through: satisfy the boss, and you’re basically done, no matter how troubling your aims or methods may be in a purely academic context.)

Creating Kinds

So why does all this matter, other than to the quantitatively gifted individuals at the cutting edge of data science? It matters because, in Isaac’s words:

Theories and classifications in the human sciences do not “discover” an independently existing reality; they help, in part, to create it. Much of this comes down to the publicity of knowledge. Insofar as scientific descriptions of people are made available to the public, they may “change how we can think of ourselves, [and] change our sense of self-worth, even how we remember our own past.”

It is very hard to develop categories and kinds for internet firms, because they are so secretive about most of their operations. (And make no mistake about the current PR kerfuffle for Facebook: it will lead the company to become ever more secretive about its data science, just as Target started camouflaging its pregnancy-related ads and not talking to reporters after people appeared creeped out by the uncanny accuracy of its natal predictions.) But the data collection of the firms is creating whole new kinds of people—for marketers, for the NSA, and for anyone with the money or connections to access the information.

More likely than not, encoded in Facebook’s database is some new, milder DSM, with categories like the slightly stingy (who need to be induced to buy more); the profligate, who need frugality prompts; the creepy, who need to be hidden in newsfeeds lest they bum out the cool. Our new “Science Mart” creates these new human kinds, but also alters them, as “new sorting and theorizing induces changes in self-conception and in behavior of the people classified.” Perhaps in the future, upon being classified as “slightly depressed” by Facebook, users will see more happy posts. Perhaps the hypomanic will be brought down a bit. Or, perhaps if their state is better for business, it will be cultivated and promoted.

You may think that last possibility unfair, or a mischaracterization of the power of Facebook. But shouldn’t children have been excluded from its emotion experiment? Shouldn’t those whom it suspects may be clinically depressed? Shouldn’t some independent reviewer have asked about those possibilities? Journalists try to reassure us that Facebook is better now than it was 2 years ago. But the power imbalances in social science remain as funding cuts threaten researchers’ autonomy. Until research in general is properly valued, we can expect more psychologists, anthropologists, and data scientists to attune themselves to corporate research agendas, rather than questioning why data about users is so much more available than data about company practices.

Image Note: I’ve inserted a picture of Isaac’s book, which I highly recommend to readers interested in the history of social science.

*I suggested this was a problem in 2010.

Book Symposium: Driesen’s The Economic Dynamics of Law

Next week, we will be hosting a symposium on David Driesen’s book The Economic Dynamics of Law (Cambridge University Press, 2013). The symposium will be held from Mar. 31 to Apr. 3, 2014. As the press’s webpage explains,

This book offers a dynamic theory of law and economics focused on change over time, aimed at avoiding significant systemic risks (like financial crises and climate disruption), and implemented through a systematic analysis of law’s economic incentives and how people actually respond to them. This theory offers a new vision of law as fundamentally a macro-level enterprise establishing normative commitments and a framework for numerous private transactions, rather than as an analogue to a market transaction. It explains how neoclassical law and economics sparked decades of deregulation culminating in the 2008 financial collapse. It then shows how economic dynamic theory helps scholars and policymakers make wise choices about how to avoid future catastrophes while keeping open a robust set of economic opportunities, with individual chapters addressing the law and economics of financial regulation, contract, property, intellectual property, antitrust, national security, and climate disruption.

Our terrific line-up of commenters will include:

Sanja Bogojevic
Brett Frischmann
James Hackney
Michael Livermore
Martha McCluskey
Uma Outka
Arden Rowell
Jennifer Taub

Thanks to them, and to David, for being part of the symposium—we all look forward to the event. Given the topic of the 2014 Phillips Lecture, it’s clear that “avoiding future catastrophes while keeping open a robust set of economic opportunities” is a critical issue for our times.

Why Some Risk Sending Intimate Pictures to “Strangers” and What It Says About Privacy

It is, as always, an honor and a pleasure to speak with the Co-Op community. Thank you to Danielle for inviting me back and thank yous all around for inviting me onto your desks, into your laps, or into your hands.

My name is Ari and I teach at New York Law School. In fact, I am honored to have been appointed Associate Professor of Law and Director of the Institute for Information Law and Policy at NYLS this year, an appointment, beginning this summer, about which I am super excited. I am also finishing my doctoral dissertation in sociology at Columbia University. My scholarship focuses on the law and policy of Internet social life, and I am particularly focused on online privacy, the injustices and inequalities in unregulated online social spaces, and the digital implications for our cultural creations.

Today, and for most of this month, I want to talk a little bit about the relationship between strangers, intimacy, and privacy.

Over the last 2 years, I have conducted quantitative surveys and qualitative interviews with almost 1,000 users of any of the several gay-oriented geolocation platforms, the most famous of which is “Grindr.” These apps are described (or derided, if you prefer) as “hook up apps,” or tools that allow gay men to meet each other for sex. That does happen. But the apps also allow members of a tightly identified and discriminated group to meet each other when they move to a new town and don’t know anyone, to make friends, and to fall in love. Grindr, my survey respondents report, has created more than its fair share of long-term relationships and, in marriage-equality states, marriages.

But Grindr and its cousins are, at least in part, about sex, which is why the app is one good place to study the prevalence of sharing intimate photographs and the sharers’ rationales. My sample is a random sample of a single population: gay men. Ages range from 18 to 59 (I declined to include anyone who self-reported as underage); locations span the globe. My online survey was limited to gay men who had used the app for more than one week at any time in the previous 2 years. This allowed me to focus on actual users rather than the merely curious. Approximately 68% of active users reported having sent an intimate picture of themselves to someone they were chatting with. I believe the real number is much higher. Although some of those users anonymized their initial photo, i.e., cropped out their head or something similar, nearly 89% of users who admitted sending intimate photos to a “stranger” they met online also admitted to ultimately sending an identifiable photo, as well. And, yet, not one respondent reported being victimized, to their knowledge, by recipient misuse of an intimate photograph. Indeed, only a small percentage (1.9%) reported being concerned about it or letting it enter into their decision about whether to send the photo in the first place.

I put the word “stranger” in quotes because I contend that the recipients are not really strangers as we traditionally understand the term. And this matters: you can’t share something with a stranger and expect it to remain private. Some people argue you can’t even do that with a close friend: you assume the risk of dissemination when you tell anyone anything, some say. But the risk is, at the least, so much higher with strangers that it is difficult for some to imagine a viable expectation-of-privacy argument when you choose to share intimate information with a stranger. I disagree. Sharing something with a “stranger” need not always extinguish your expectation of privacy and your right to sue under an applicable privacy tort if the intimate information is shared further.

A sociologist would say that a “stranger” is a person who is unknown or with whom you are not acquainted. The law accepts this definition in at least some respects: sometimes we say that individuals are “strangers in the eyes of the law,” like a legally married same-sex couple when they travel from New Jersey to Mississippi. I argue that the person on the other end of a Grindr chat is not necessarily a stranger because nonverbal social cues of trustworthiness, which can be seen anywhere, are heightened by the social group affinity of an all-gay male environment.

Over the next few weeks, I will tease out the rest of this argument: that trust, and, therefore, expectations of privacy, can exist among strangers. Admittedly, I’m still working it out and I would be grateful for any and all comments in future posts.

Some Brilliant Thoughts on Social Media

The LSE has a consistently illuminating podcast series, but Nick Couldry’s recent lecture really raised the bar. He seamlessly integrates cutting edge media theory into a comprehensive critique of social media’s role in shaping events for us. I was also happy to hear him praise the work of two American scholars I particularly admire: former Co-Op guest blogger Joseph Turow (whose Daily You was described as one of the most influential books of the past decade in media studies), and Julie Cohen (whose Configuring the Networked Self was featured in a symposium here).

I plan on posting some excerpts if I can find a transcript, or a published version of the talk. In the meantime, some more brilliant thoughts on social media, this time from Ian Bogost:

For those of us lucky enough to be employed, we’re really hyperemployed—committed to our usual jobs and many other jobs as well. . . . Hyperemployment offers a subtly different way to characterize all the tiny effort we contribute to Facebook and Instagram and the like. It’s not just that we’ve been duped into contributing free value to technology companies (although that’s also true), but that we’ve tacitly agreed to work unpaid jobs for all these companies. . . . We do tiny bits of work for Google, for Tumblr, for Twitter, all day and every day.

Today, everyone’s a hustler. But now we’re not even just hustling for ourselves or our bosses, but for so many other, unseen bosses. For accounts payable and for marketing; for the Girl Scouts and the Youth Choir; for Facebook and for Google; for our friends via their Kickstarters and their Etsy shops; for Twitter, which just converted years of tiny, aggregated work acts into $78 of fungible value per user.

And perhaps also for the NSA. As participants in 2011’s Digital Labor conference gear up for a reprise, I’m sure we’ll be discussing these ideas.

Management Wants Precarity: A California Ideology for Employment Law

The reader of Talent Wants to Be Free effectively gets two books for the price of one. As one of the top legal scholars on the intersection of employment and intellectual property law, Prof. Lobel skillfully describes key concepts and disputes in both areas. Lobel has distilled years of rigorous, careful legal analysis into a series of narratives, theories, and key concepts. Lobel brings legal ideas to life, dramatizing the workplace tensions between loyalty and commitment, control and creativity, better than any work I’ve encountered over the past decade. Her enthusiasm for the subject matter animates the work throughout, making the book a joy to read. Most of the other participants in this symposium have already commented on how successful this aspect of the book is, so I won’t belabor their points.

Talent Wants to Be Free also functions as a second kind of book: a management guide. The ending of the first chapter sets up this project, proposing to advise corporate leaders on how to “meet the challenge” of keeping the best performers from leaving, and how “to react when, inevitably, some of these most talented people become competitors” (26). This is a work not only destined for law schools, but also for business schools: for captains of industry eager for new strategies to deploy in the great game of luring and keeping “talent.” Reversing Machiavelli’s famous prescription, Lobel advises the Princes of modern business that it is better to be loved than feared. They should celebrate mobile workers, and should not seek to bind their top employees with burdensome noncompete clauses. Drawing on the work of social scientists like AnnaLee Saxenian (68), Lobel argues that an ecology of innovation depends on workers’ ability to freely move to where their talents are best appreciated.

For Lobel, many restrictions on the free flow of human capital are becoming just as much of a threat to economic prosperity as excess copyright, patent, and trademark protection. Both sets of laws waste resources combating the free flow of information. A firm that trains its workers may want to require them to stay for several years, to recoup its investment (28-29). But Lobel exposes the costs of such a strategy: human capital controls “restrict careers and connections that are born between people” (32). They can also hurt the development of a local talent pool that could, in all likelihood, redound to the benefit of the would-be controlling firm. Trapped in their firms by rigid Massachusetts custom and law, Route 128’s talent tended to stagnate. California refused to enforce noncompete clauses, encouraging its knowledge workers to find the firms best able to use their skills.

I have little doubt that Lobel’s book will be assigned in B-schools from Stanford to Wharton. She tells a consistently positive, upbeat story about management techniques to reconcile the seemingly incompatible goals of personal fulfillment, profit maximization, and regional advantage. But for every normative term that animates her analysis (labor mobility, freedom of contract, innovation, creative or constructive destruction) there is a shadow term (precarity, exploitation, disruption, waste) that goes unexplored. I want to surface a few of these terms, and explore the degree to which they limit the scope or force of Lobel’s message. My worry is that managers will be receptive to the book not because they want talent to be free in the sense of “free speech,” but rather, in the sense of “free beer”: interchangeable cog(nitive unit)s desperately pitching themselves on MTurk and TaskRabbit.

When “Skin in the Game” is Literal

Back in the Bush years, health policy was all about making sure patients (“consumers”) had “skin in the game,” and faced real costs whenever they sought care. More cautious voices worried that patients often didn’t know when to avoid unnecessary care, and when failure to visit a doctor would hurt them. Now there is renewed evidence that the cautionary voices were right:

One-third of US workers now have high-deductible health plans, and those numbers are expected to grow in 2014 as implementation of the Affordable Care Act continues. There is concern that high-deductible health plans might cause enrollees of low socioeconomic status to forgo emergency care as a result of burdensome out-of-pocket costs. . . . Our findings suggest that plan members of low socioeconomic status at small firms responded inappropriately to high-deductible plans and that initial reductions in high-severity ED visits might have increased the need for subsequent hospitalizations. Policy makers and employers should consider proactive strategies to educate high-deductible plan members about their benefit structures or identify members at higher risk of avoiding needed care. They should also consider implementing means-based deductibles.

To put this in more concrete terms: “skin in the game” for many poor families may mean choosing whether to “tough out” a peritonsillar abscess or appendicitis, knowing that the temporary pain may allow them to pay rent, but also may lead to sepsis, necrosis, peritonitis, or death. As Corey Robin has observed, there is a philosophical vision affirming the imposition of such choices, but it’s not utilitarian:

By imposing this drama of choice, the economy becomes a theater of self-disclosure, the stage upon which we discover and reveal our ultimate ends. It is not in the casual chatter of a seminar or the cloistered pews of a church that we determine our values; it is in the duress—the ordeal—of our lived lives, those moments when we are not only free to choose but forced to choose. “Freedom to order our own conduct in the sphere where material circumstances force a choice upon us,” Hayek wrote, “is the air in which alone moral sense grows and in which moral values are daily re-created.”

For some, the choice is between investing in gold or cryptocurrencies; for others, between searing pain and eviction. But the market, in the “skin in the game” vision, is at least distributing these opportunities for self-disclosure through choice to all.

Brian Tamanaha’s Straw Men (Part 2): Who’s Cherry Picking?

(Reposted from Brian Leiter’s Law School Reports)

BT Claim 2:  Using more years of data would reduce the earnings premium

BT Quote: “There is no doubt that including 1992 to 1995 in their study would measurably reduce the ‘earnings premium.’”

Response:  Using more years of historical data is as likely to increase the earnings premium as to reduce it

We have doubts about the effect of more data, even if Professor Tamanaha does not.

Without seeing data that would enable us to calculate earnings premiums, we can’t know for sure if introducing more years of comparable data would increase our estimates of the earnings premium or reduce it.

The issue is not simply the state of the legal market or entry level legal hiring—we must also consider how our control group of bachelor’s degree holders (who appear to be similar to the law degree holders but for the law degree) was doing.   To measure the value of a law degree, we must measure earnings premiums, not absolute earnings levels.

As a commenter on Tamanaha’s blog helpfully points out:

“I think you make far too much of the exclusion of the period from 1992-1995. Entry-level employment was similar to 1995-98 (as indicated by table 2 on page 9).

But this does not necessarily mean that the earnings premium was the same or lower. One cannot form conclusions about all JD holders based solely on entry-level employment numbers. As S&M’s data suggests, the earnings premium tends to be larger during recessions and their immediate aftermath and the U.S. economy only began an economic recovery in late 1992.

Lastly, even if you are right about the earnings premium from 1992-1995, what about 1987-91 when the legal economy appeared to be quite strong (as illustrated by the same chart referenced above)? Your suggestion to look at a twenty year period excludes this time frame even though it might offset the diminution in the earnings premium that would allegedly occur if S&M considered 1992-95.”

There is nothing magical about 1992.  If good quality data were available, why not go back to the 1980s or beyond?   Stephen Diamond and others make this point.

The 1980s are generally believed to be a boom time in the legal market.  Assuming for the sake of the argument that law degree earnings premiums are pro-cyclical (we are not sure if they are), inclusion of more historical data going back past 1992 is just as likely to increase our earnings premium as to reduce it.  Older data might suggest an upward trend in education earnings premiums, which could mean that our assumption of flat earnings premiums may be too conservative. Leaving aside the data quality and continuity issues we discussed before (which led us to pick 1996 as our start year), there is no objective reason to stop in the early 1990s instead of going back further to the 1980s.

Our sample from 1996 to 2011 includes both good times and bad for law graduates and for the overall economy, and in every part of the cycle, law graduates appear to earn substantially more than similar individuals with only bachelor’s degrees.

 

[Image: Cycles]

 

This might be as good a place as any to affirm that we certainly did not pick 1996 for any nefarious purpose.  Having worked with the SIPP before and being aware of the change in design, we chose 1996 purely because of the benefits we described here.  Once again, should Professor Tamanaha or any other group wish to use the publicly available SIPP data to extend the series farther back, we’ll be interested to see the results.

Brian Tamanaha’s Straw Men (Part 1): Why we used SIPP data from 1996 to 2011

(Reposted from Brian Leiter’s Law School Reports)

 

BT Claim:  We could have used more historical data without introducing continuity and other methodological problems

BT quote:  “Although SIPP was redesigned in 1996, there are surveys for 1993 and 1992, which allow continuity . . .”

Response:  Using more historical data from SIPP would likely have introduced continuity and other methodological problems

SIPP does indeed go back farther than 1996.  We chose that date because it was the beginning of an updated and revitalized SIPP that continues to this day.  SIPP was substantially redesigned in 1996 to increase sample size and improve data quality.  Combining different versions of SIPP could have introduced methodological problems.  That doesn’t mean one could not do it in the future, but it might raise as many questions as it would answer.

Had we used earlier data, it could be difficult to know to what extent changes to our earnings premium estimates were caused by changes in the real world, and to what extent they were artifacts caused by changes to the SIPP methodology.

Because SIPP has developed and improved over time, the more recent data is more reliable than older historical data.  All else being equal, a larger sample size and more years of data are preferable.  However, data quality issues suggest focusing on more recent data.

If older data were included, it probably would have been appropriate to weight more recent and higher quality data more heavily than older and lower quality data.  We would likely also have had to make adjustments for differences that might have been caused by changes in survey methodology.  Such adjustments would inevitably have been controversial.

Because the sample size increased dramatically after 1996, including a few years of pre-1996 data would not provide as much new data or have the potential to change our estimates by nearly as much as Professor Tamanaha believes.  There are also gaps in SIPP data from the 1980s because of insufficient funding.

These issues and the 1996 changes are explained at length in the Survey of Income and Program Participation User’s Guide.

Changes to the new 1996 version of SIPP include:

  • Roughly doubling the sample size
    • This improves the precision of estimates and shrinks standard errors
  • Lengthening the panels from 3 years to 4 years
    • This reduces the severity of the regression to the median problem
  • Introducing computer-assisted interviewing to improve data collection and reduce errors or the need to impute for missing data
  • Introducing oversampling of low-income neighborhoods
    • This mitigates response bias issues we previously discussed, which are most likely to affect the bottom of the distribution
  • Instituting new income topcoding procedures with the 1996 Panel
    • This will affect both means and various points in the distribution
    • Topcoding is done on a monthly or quarterly basis, and can therefore undercount end-of-year bonuses, even for those who are not extremely high income year-round
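The sample-size point can be made concrete. Here is a minimal simulation (illustrative only; the distribution and sample sizes are invented, not SIPP’s) showing why doubling a sample shrinks the standard error of a mean by roughly a factor of 1/√2:

```python
import math
import random

random.seed(1)

def se_of_mean(n, trials=2000):
    """Empirical standard error of the sample mean for n draws."""
    means = []
    for _ in range(trials):
        draws = [random.gauss(0, 1) for _ in range(n)]
        means.append(sum(draws) / n)
    mu = sum(means) / trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / trials)

# Doubling n from 200 to 400 should cut the standard error by ~29%.
se_small = se_of_mean(200)
se_large = se_of_mean(400)
print(se_small / se_large)  # roughly sqrt(2), i.e. about 1.41
```

The same 1/√n logic is why the doubled 1996 sample noticeably tightens estimates across the income distribution.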

Most government surveys topcode income data—that is, there is a maximum income that they will report.  This is done to protect the privacy of high-income individuals who could more easily be identified from ostensibly confidential survey data if their incomes were revealed.

Because law graduates tend to have higher incomes than bachelor’s degree holders, topcoding introduces downward bias to earnings premium estimates. Midstream changes to topcoding procedures can change this bias and create problems with respect to consistency and continuity.
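That downward bias can be sketched with a toy simulation (hypothetical parameters, not SIPP data): capping reported incomes compresses the law-graduate mean more than the bachelor’s mean, because law-graduate incomes hit the cap more often, so the measured premium shrinks.

```python
import random

random.seed(0)

# Hypothetical illustration only: invented lognormal income
# distributions and an assumed reporting cap, not SIPP figures.
TOPCODE = 150_000

law = [random.lognormvariate(11.6, 0.6) for _ in range(100_000)]
ba = [random.lognormvariate(11.1, 0.6) for _ in range(100_000)]

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

true_premium = mean(law) - mean(ba)
coded_premium = mean(min(x, TOPCODE) for x in law) - mean(
    min(x, TOPCODE) for x in ba
)

# The capped (topcoded) premium comes out smaller than the true premium.
print(round(true_premium), round(coded_premium))
```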

Without going into more detail, the topcoding procedure that began in 1996 appears to be an improvement over the earlier topcoding procedure.

These are only a subset of the problems extending the SIPP data back past 1996 would have introduced.  For us, the costs of backfilling data appear to outweigh the benefits.  If other parties wish to pursue that course, we’ll be interested in what they find, just as we hope others were interested in our findings.

Brian Tamanaha’s Straw Men (Overview)

(Cross posted from Brian Leiter’s Law School Reports)

Brian Tamanaha previously told Inside Higher Education that our research only looked at average earnings premiums and did not consider the low end of the distribution.  Dylan Matthews at the Washington Post reported that Professor Tamanaha’s description of our research was “false”. 

In his latest post, Professor Tamanaha combines interesting critiques with some not very interesting errors and claims that are not supported by data.   Responding to his blog post is a little tricky as his ongoing edits rendered it something of a moving target.  While we’re happy with improvements, a PDF of the version to which we are responding is available here just so we all know what page we’re on.

Stephen Diamond explains why Tamanaha apparently changed his post: Ted Seto and Eric Rasmusen expressed concerns about Tamanaha’s use of ad hominem attacks.

Some of Tamanaha’s new errors are surprising, because they come after an email exchange with him in which we addressed them.  For example, Tamanaha’s description of our approach to ability sorting constitutes a gross misreading of our research.  Tamanaha also references the wrong chart for earnings premium trends and misinterprets confidence intervals.  And his description of our present value calculations is way off the mark.

Here are some quick bullet point responses, with details below in subsequent posts:

  • Forecasting and Backfilling
    • Using more historical data from SIPP would likely have introduced continuity and other methodological problems
    • Using more years of data is as likely to increase the historical earnings premium as to reduce it
    • If pre-1996 historical data finds lower earnings premiums, that may suggest a long term upward trend and could mean that our estimates of flat future earnings premiums are too conservative and the premium estimates should be higher
    • The earnings premium in the future is just as likely to be higher as it is to be lower than it was in 1996-2011
    • In the future, the earnings premium would have to be lower by 85 percent for an investment in law school to destroy economic value at the median
  • Data sufficiency
    • 16 years of data is more than is used in similar studies to establish a baseline.  This includes studies Tamanaha cited and praised in his book.
    • Our data includes both peaks and troughs in the cycle.  Across the cycle, law graduates earn substantially more than bachelor’s degree holders.
  • Tamanaha’s errors and misreading
    • We control for ability sorting and selection using extensive controls for socio-economic, academic, and demographic characteristics
    • This substantially reduces our earnings premium estimates
    • Any lingering ability sorting and selection is likely offset by response bias in SIPP, topcoding, and other problems that cut in the opposite direction
    • Tamanaha references the wrong chart for earnings premium trends and misinterprets confidence intervals
    • Tamanaha is confused about present value, opportunity cost, and discounting
    • Our in-school earnings are based on data, but, in any event, “correcting” to zero would not meaningfully change our conclusions
  • Tamanaha’s best line
    • “Let me also confirm that [Simkovic & McIntyre’s] study is far more sophisticated than my admittedly crude efforts.”

The Locust and the Bee

Fables have been in the politico-economic air of late. The FT’s Martin Wolf considered the locust part of a master metaphor for the future of the global economy. He concluded that “the financial crisis was the product of an unstable interaction between ants (excess savers), grasshoppers (excess borrowers) and locusts (the financial sector that intermediated between the two).”

Now Geoff Mulgan has entered the fray with the excellent book The Locust and the Bee: Predators and Creators in Capitalism’s Future. As Mulgan observes,

If you want to make money, you can choose between two fundamentally different strategies. One is to create genuinely new value by bringing resources together in ways that serve people’s wants and needs. The other is to seize value through predation, taking resources, money, or time from others, whether they like it or not.
