Category: Political Economy

European Parliament Resolution on Google

The European Parliament voted 384–174 today in favor of a “resolution on Supporting Consumer Rights in the Digital Single Market.” The text of the resolution:

Stresses that all internet traffic should be treated equally, without discrimination, restriction or interference, independently of its sender, receiver, type, content, device, service or application;

Notes that the online search market is of particular importance in ensuring competitive conditions within the Digital Single Market, given the potential development of search engines into gatekeepers and their possibility of commercialising secondary exploitation of obtained information; therefore calls on the Commission to enforce EU competition rules decisively, based on input from all relevant stakeholders and taking into account the entire structure of the Digital Single Market in order to ensure remedies that truly benefit consumers, internet users and online businesses; furthermore calls on the Commission to consider proposals with the aim of unbundling search engines from other commercial services as one potential long-term solution to achieve the previously mentioned aims;

Stresses that when using search engines, the search process and results should be unbiased in order to keep internet search non-discriminatory, to ensure more competition and choice for users and consumers and to maintain the diversity of sources of information; therefore notes that indexation, evaluation, presentation and ranking by search engines must be unbiased and transparent, while for interlinked services, search engines must guarantee full transparency when showing search results; calls on the Commission to prevent any abuse in the marketing of interlinked services by operators of search engines;

Some in the US tech press have played this up as an incipient effort to “break up” Google, with predictable derision at “technopanic.” (Few tend to reflect on whether the 173 former firms listed here really need to be part of one big company.) But the resolution’s linking of net and search neutrality suggests other regulatory approaches (prefigured in my 2008 paper Internet Nondiscrimination Principles: Commercial Ethics for Carriers and Search Engines). I’ve developed these ideas over the years, and I hope my recently released book’s chapters on search and digital regulation will be of some use to policymakers. Without some regulatory oversight, our black box society will only get more opaque.

From Piketty to Law and Political Economy

Thomas Piketty’s Capital in the 21st Century continues to spur debate among economists. It has many lessons for attorneys, as well. But does law have something to offer in return? I make that case in my review of Capital, focusing on Piketty’s call for a renewal of the social science of political economy. My review underscores the complexity of the relationship between law and social science. Legal academics import ideas from other fields, but also return the favor by informing those fields. Ideally, the process is dialectical, with lawyers and social scientists in dialogue.

I saw that process firsthand in May at the conference Critiquing Cost-Benefit Analysis of Financial Regulation. We at the Association of Professors of Political Economy and the Law (APPEAL) are planning further events and projects to continue that dialogue.

I also saw a renewed synergy between law and social sciences at the Rethinking Economics conference last month. Economists inquired about bankruptcy law to better understand the roots of the financial crisis, and identified the limits that pension law places on certain types of investment strategies.

Some of the organizers of the conference recently took the argument in a new direction, focusing on the interaction between Modern Monetary Theory (MMT) and campaign finance reform. “Leveling up” modes of campaign finance reform have often stalled because taxpayers balk at funding political campaigns. Given that private campaign funders’ return on investment has been estimated at 22,000% (that is, roughly $220 returned for every dollar contributed), that seems an unwise concession to crony capitalism. So how do we get movement on the issue?

Interview on The Black Box Society

Balkinization just published an interview on my forthcoming book, The Black Box Society. Law profs may be interested in our dialogue on methodology—particularly, the unique role of the legal scholar amid increasing academic specialization. I’ve tried to surface several strands of inspiration for the book.

Social Science in an Era of Corporate Big Data

In my last post, I explored the characteristics of Facebook’s model (i.e., exemplary) users. Today, I want to discuss the model users in the company, i.e., the data scientists who try to build stylized versions of reality (models) based on certain data points and theories. The Facebook emotion experiment is part of a much larger reshaping of social science. To what extent will academics study data-driven firms like Facebook, and to what extent will they try to join forces with such firms’ in-house researchers to study others?

Present incentives are clear: collaborate with (rather than develop a critical theory of) big data firms.  As Zeynep Tufekci puts it, “the most valuable datasets have become corporate and proprietary [and] top journals love publishing from them.”  “Big data” has an aura of scientific validity simply because of the velocity, volume, and variety of the phenomena it encompasses. Psychologists certainly must have learned *something* from looking at over 600,000 accounts’ activity, right?

The problem, though, is that the corporate “science” of manipulation is a far cry from academic science’s ethics of openness and reproducibility.* That’s already led to some embarrassments in the crossover from corporate to academic modeling (such as the failures of Google Flu Trends). Researchers within Facebook worried about multiple experiments being performed at once on individual users, which might compromise the results of any one study. Standardized review could have prevented that. But, true to the Silicon Valley ethic of “move fast and break things,” speed was paramount: “There’s no review process. Anyone…could run a test…trying to alter peoples’ behavior,” said one former Facebook data scientist.

Grant Getters and Committee Men

Why are journals so interested in this form of research? Why are academics jumping on board? Fortunately, social science has matured to the point that we now have a robust, insightful literature about the nature of social science itself. I know, this probably sounds awfully meta: exactly the type of navel-gazing Senator Coburn would excommunicate from the church of science. But it actually provides a much-needed historical perspective on how power and money shape knowledge. Consider, for instance, the opening of Joel Isaac’s article Tangled Loops, on Cold War social science:

During the first two decades of the Cold War, a new kind of academic figure became prominent in American public life: the credentialed social scientist or expert in the sciences of administration who was also, to use the parlance of the time, a “man of affairs.” Some were academic high-fliers conscripted into government roles in which their intellectual and organizational talents could be exploited. McGeorge Bundy, Walt Rostow, and Robert McNamara are the archetypes of such persons. An overlapping group of scholars became policymakers and political advisers on issues ranging from social welfare provision to nation-building in emerging postcolonial states.

Postwar leaders of the social and administrative sciences such as Talcott Parsons and Herbert Simon were skilled scientific brokers of just this sort: good “committee men,” grant-getters, proponents of interdisciplinary inquiry, and institution-builders. This hard-nosed, suit-wearing, business-like persona was connected to new, technologically refined forms of social science. . . . Antediluvian “social science” was eschewed in favour of mathematical, behavioural, and systems-based approaches to “human relations” such as operations research, behavioral science, game theory, systems theory, and cognitive science.

One of Isaac’s major contributions in that piece is to interpret the social science coming out of the academy (and entities like RAND) as a cultural practice: “Insofar as theories involve certain forms of practice, they are caught up in worldly, quotidian matters: performances, comportments, training regimes, and so on.” Government leveraged funding to mobilize research to specific ends. To maintain university patronage systems and research centers, leaders had to be on good terms with the grantors. The common goal of strengthening the US economy (and defeating the communist threat) cemented an ideological alliance.

Government still exerts influence in American social and behavioral sciences. But private industry controls critical data sets for the most glamorous, data-driven research. In the Cold War era, “grant getting” may have been the key to economic security, and to securing one’s voice in the university. Today, “exit” options are more important than voice, and what better place to exit to than an internet platform? Thus academic/corporate “flexians” shuttle between the two worlds. Their research cannot be too venal, lest the academy disdain it. But neither can it indulge in, say, critical theory (what would nonprofit social networks look like?), just as Cold War social scientists were ill-advised to, say, develop Myrdal’s or Leontief’s theories. There was a lot more money available for the Friedmanite direction economics would, eventually, take.

Intensifying academic precarity also makes the blandishments of corporate data science an “offer one can’t refuse.” Tenured jobs are growing scarcer. As MOOCmongers aspire to deskill and commoditize the academy, industry’s benefits and flexibility grow ever more alluring. Academic IRBs can impose a heavy bureaucratic burden; the corporate world is far more flexible. (Consider all the defenses of the Facebook experiment authored last week which emphasized how little review corporate research has to go through: satisfy the boss, and you’re basically done, no matter how troubling your aims or methods may be in a purely academic context.)

Creating Kinds

So why does all this matter, other than to the quantitatively gifted individuals at the cutting edge of data science? It matters because, in Isaac’s words:

Theories and classifications in the human sciences do not “discover” an independently existing reality; they help, in part, to create it. Much of this comes down to the publicity of knowledge. Insofar as scientific descriptions of people are made available to the public, they may “change how we can think of ourselves, [and] change our sense of self-worth, even how we remember our own past.”

It is very hard to develop categories and kinds for internet firms, because they are so secretive about most of their operations. (And make no mistake about the current PR kerfuffle for Facebook: it will lead the company to become ever more secretive about its data science, just as Target started camouflaging its pregnancy-related ads and not talking to reporters after people appeared creeped out by the uncanny accuracy of its natal predictions.) But the data collection of the firms is creating whole new kinds of people—for marketers, for the NSA, and for anyone with the money or connections to access the information.

More likely than not, encoded in Facebook’s database is some new, milder DSM, with categories like the slightly stingy (who need to be induced to buy more); the profligate, who need frugality prompts; the creepy, who need to be hidden in newsfeeds lest they bum out the cool. Our new “Science Mart” creates these new human kinds, but also alters them, as “new sorting and theorizing induces changes in self-conception and in behavior of the people classified.” Perhaps in the future, upon being classified as “slightly depressed” by Facebook, users will see more happy posts. Perhaps the hypomanic will be brought down a bit. Or, perhaps if their state is better for business, it will be cultivated and promoted.

You may think that last possibility unfair, or a mischaracterization of the power of Facebook. But shouldn’t children have been excluded from its emotion experiment? Shouldn’t those whom it suspects may be clinically depressed? Shouldn’t some independent reviewer have asked about those possibilities? Journalists try to reassure us that Facebook is better now than it was two years ago. But the power imbalances in social science remain as funding cuts threaten researchers’ autonomy. Until research in general is properly valued, we can expect more psychologists, anthropologists, and data scientists to attune themselves to corporate research agendas, rather than questioning why data about users is so much more available than data about company practices.

Image Note: I’ve inserted a picture of Isaac’s book, which I highly recommend to readers interested in the history of social science.

*I suggested this was a problem in 2010.

Facebook’s Hidden Persuaders

Major internet platforms are constantly trying new things out on users, to better change their interfaces. Perhaps they’re interested in changing their users, too. Consider this account of Facebook’s manipulation of its newsfeed:

If you were feeling glum in January 2012, it might not have been you. Facebook ran an experiment on 689,003 users to see if it could manipulate their emotions. One experimental group had stories with positive words like “love” and “nice” filtered out of their News Feeds; another experimental group had stories with negative words like “hurt” and “nasty” filtered out. And indeed, people who saw fewer positive posts created fewer of their own. Facebook made them sad for a psych experiment.
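The mechanics are straightforward to sketch. Below is a minimal, purely illustrative version of the word-based feed filtering the study describes; the word lists, function names, and suppression rate are my assumptions, not Facebook’s actual implementation (which reportedly relied on the LIWC word-count software and has not been published):

```python
import random

# Toy word lists standing in for the LIWC categories the study reportedly used.
POSITIVE_WORDS = {"love", "nice", "happy", "great"}
NEGATIVE_WORDS = {"hurt", "nasty", "sad", "awful"}

def contains_any(post, words):
    """Return True if the post's text contains any of the given words."""
    tokens = post.lower().split()
    return any(word in tokens for word in words)

def filter_feed(posts, suppress="positive", rate=0.5):
    """Probabilistically withhold posts containing the targeted emotion words.

    suppress: which emotional valence to filter out of the feed.
    rate: chance that a matching post is withheld (hypothetical parameter).
    """
    targeted = POSITIVE_WORDS if suppress == "positive" else NEGATIVE_WORDS
    return [p for p in posts
            if not (contains_any(p, targeted) and random.random() < rate)]

feed = ["I love this!", "What a nasty day", "Meeting at noon"]
print(filter_feed(feed, suppress="positive"))  # the "love" post may be dropped
```

The unsettling part is not the code, which any intern could write, but the scale at which a parameter like rate can be turned on hundreds of thousands of people.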

James Grimmelmann suggests some potential legal and ethical pitfalls. Julie Cohen has dissected the larger political economy of modulation. For now, I’d just like to present a subtle shift in Silicon Valley rhetoric:

c. 2008: “How dare you suggest we’d manipulate our users! What a paranoid view.”
c. 2014: “Of course we manipulate users! That’s how we optimize time-on-machine.”

There are many cards in the denialists’ deck. An earlier Facebook-inspired study warns of “greater spikes in global emotion that could generate increased volatility in everything from political systems to financial markets.” Perhaps social networks will take on the dampening of inconvenient emotions as a public service. For a few glimpses of the road ahead, take a look at Bernard Harcourt (on Zunzuneo), Jonathan Zittrain, Robert Epstein, and N. Katherine Hayles.

A More Nuanced View of Legal Automation

A Guardian writer has updated Farhad Manjoo’s classic report, “Will a Robot Steal Your Job?” Of course, lawyers are in the crosshairs. As Julius Stone noted in The Legal System and Lawyers’ Reasoning, scholars have addressed the automation of legal processes since at least the 1960s. Al Gore now says that a “new algorithm . . . makes it possible for one first year lawyer to do the same amount of legal research that used to require 500.”* But when one actually reads the studies trumpeted by the prophets of disruption, a more nuanced perspective emerges.

Let’s start with the experts cited first in the article:

Oxford academics Carl Benedikt Frey and Michael A Osborne have predicted computerisation could make nearly half of jobs redundant within 10 to 20 years. Office work and service roles, they wrote, were particularly at risk. But almost nothing is impervious to automation.

The idea of “computing” a legal obligation may seem strange at the outset, but we already enjoy (or endure) it daily. For example, a DVD may be licensed for play only in the US and Europe, and then be “coded” so it can play in those regions and not others. Were a human playing the DVD for you, he might demand a copy of the DVD’s terms of use and receipt, to see if it was authorized for playing in a given area. Computers need such a term translated into a language they can “understand.” More precisely, the legal terms embedded in the DVD must lead to predictable reactions from the hardware that encounters them. From Lessig to Virilio, the lesson is clear: “architectural regimes become computational, and vice versa.”
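To make the point concrete, here is a minimal sketch of the mechanical check a region-coded player performs. The code is illustrative only (real players encode the same idea in firmware, not Python), but it captures how a “legal term” becomes data and compliance becomes an if-statement:

```python
# DVD region codes: 1 covers the US and Canada, 2 covers Europe and Japan, etc.
# A disc licensed for regions 1 and 2 carries that restriction as data.

PLAYER_REGION = 2  # a player sold in Europe

def can_play(disc_regions, player_region=PLAYER_REGION):
    """Playback is allowed only if the player's region appears on the
    disc's list; "region-free" discs (coded 0 here) play anywhere."""
    return 0 in disc_regions or player_region in disc_regions

print(can_play({1, 2}))  # True: licensed for the US and Europe
print(can_play({3}))     # False: the hardware simply refuses, with no human judgment involved
```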

So certainly, to the extent lawyers are presently doing rather simple tasks, computation can replace them. But Frey & Osborne also identify barriers to successful automation:

1. Perception and manipulation tasks. Robots are still unable to match the depth and breadth of human perception.
2. Creative intelligence tasks. The psychological processes underlying human creativity are difficult to specify.
3. Social intelligence tasks. Human social intelligence is important in a wide range of work tasks, such as those involving negotiation, persuasion and care. (26)

Frey & Osborne only explicitly discuss legal research and document review (for example, identification and isolation among mass document collections) as easily automatable. They concede that “the computerisation of legal research will complement the work of lawyers” (17). They acknowledge that “for the work of lawyers to be fully automated, engineering bottlenecks to creative and social intelligence will need to be overcome.” In the end, they actually categorize “legal” careers as having a “low risk” of “computerization” (37).

The View from AI & Labor Economics

Those familiar with the smarter voices on this topic, like our guest blogger Harry Surden, would not be surprised. There is a world of difference between computation as substitution for attorneys, and computation as complement. The latter increases lawyers’ private income and (if properly deployed) contribution to society. That’s one reason I helped devise the course Health Data and Advocacy at Seton Hall (co-taught with a statistician and data visualization expert), and why I continue to teach (and research) the law of electronic health records in my seminar Health Information, Privacy, and Innovation, now that I’m at Maryland. As Surden observes, “many of the tasks performed by attorneys do appear to require the type of higher order intellectual skills that are beyond the capability of current techniques.” But they can be complemented by an awareness of rapid advances in software, apps, and data analysis.

Konczal on Piketty

There are a number of excellent reviews of Piketty out there; to the 14 Brad DeLong collected, I’d add James K. Galbraith and Paul Krugman as well. As a former vox clamantis in deserto (a voice crying in the wilderness), I’m happy to see them. Today Mike Konczal weighs in, with a Foucauldian take:

As Foucault argued, the ability of social science to know something is the ability to anthropologize it, a power to define it. As such, it becomes a problem to be solved, a question needing an answer, something to be put on a grid of intelligibility, and a domain of expertise that exerts power over what it studies. With Piketty’s Capital, this process is now being extended to the rich and the elite. Understanding how the elite become what they are, and how their wealth perpetuates itself, is now a hot topic of scientific inquiry.

Many have tried to figure out why the rich are freaking out these days. Their wealth was saved from the financial panic, they are having a very excellent recovery, and they are poised to reap even greater gains going forward. Perhaps they are noticing that the dominant narratives about their role in society—avatars of success, job creators for the common good, innovators for social betterment, problem-solving philanthropists—are being replaced with a social science narrative where they are a problem to be studied. They are still in control, but right to be worried.

Joanne Barkan’s debunking of philanthrocapitalism is part of that story; I’d also expect to see much more reporting from Lee Fang and Republic Report on the tangled interests behind primary challenges and much think tank advocacy. Some may even suggest that children be taught the names of local billionaires, rather than those of the governor and top legislative officials, to understand how politics works. The Pikettian moment marks the inflection point when extreme wealth can’t simply be written off as some ancillary feature of our political economy, but rather, as one of its motivating forces.


Economic Dynamics and Economic Justice: Making Law Catastrophic, Middling, or Better?

Contrary to Livermore’s post, in my view Driesen’s book is particularly powerful as a window into the profound absurdity and destructiveness of the neoclassical economic framework, rather than as a middle ground that tweaks some of its techniques. Driesen’s economic dynamics lens makes a more important contribution than many contemporary legal variations on neoclassical economic themes by shifting some major assumptions, though this book does not explore that altered terrain as far as it might.

At first glance, Driesen’s foregrounding of the “dynamic” question of change over time may, as Livermore suggests, seem consistent with the basic premise of neoclassical law and economics: that incentives matter, and that law should focus ex ante, looking forward at those effects. A closer look through Driesen’s economic dynamics lens reveals how law and economics tends instead to take a covert ex post view that enshrines some snapshot of the status quo as a neutral baseline. The focus on “efficiency” (on maximizing an abstract pie of “welfare” given existing constraints) treats the consequences of law as essentially fixed by other people’s private choices, beyond the power and politics of the policy analyst and government, without considering how past, present, and future rights or wrongs constrain or enable those choices. In this neoclassical view, the job of law is narrowed to the technical task of measuring some imagined sum of individual preferences, shaped through rational microeconomic bargains that represent a middling stasis of existing values and resources, reached through tough tradeoffs that nonetheless promise to bring us steadily toward the glimmering goal of maximizing overall societal gain (“welfare”) from scarce resources.

Driesen reverses that frame by focusing on complex change over time as the main thing we can know with certainty. In the economic dynamic vision, “law creates a temporally extended commitment to a better future.” (Driesen p. 52).

Industrial Policy for Big Data

If you are childless, shop for clothing online, spend a lot on cable TV, and drive a minivan, data brokers are probably going to assume you’re heavier than average. We know that drug companies may use that data to recruit research subjects. Marketers could use the data to target ads for diet aids, or for types of food that research reveals to be particularly favored by people who are childless, shop for clothing online, spend a lot on cable TV, and drive a minivan.
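The inference itself is trivial to automate; the data is the scarce resource. Here is a purely illustrative sketch using only the proxies named above; the attribute names, the cutoff, and the threshold are my inventions, not any actual broker’s model:

```python
def infer_heavier_than_average(profile):
    """Toy proxy-based inference of the kind data brokers reportedly sell.
    Each attribute is a weak signal; the score is their conjunction."""
    signals = [
        not profile.get("has_children", True),
        profile.get("buys_clothing_online", False),
        profile.get("cable_tv_spend", 0) > 100,   # dollars per month, arbitrary cutoff
        profile.get("vehicle") == "minivan",
    ]
    # Flag the profile if most of the proxies fire.
    return sum(signals) >= 3

shopper = {"has_children": False, "buys_clothing_online": True,
           "cable_tv_spend": 150, "vehicle": "minivan"}
print(infer_heavier_than_average(shopper))  # True: all four proxies fire
```

Note what the sketch makes plain: no one ever measures weight. The classification is an artifact of correlations, which is precisely why its downstream uses deserve scrutiny.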

We may also reasonably assume that the data can be put to darker purposes: for example, to offer credit on worse terms to the obese (stereotype-driven assessment of looks and abilities reigns from Silicon Valley to experimental labs). And perhaps someday it will be put to higher purposes: for example, identifying “obesity clusters” that might be linked to overexposure to some contaminant.

To summarize, let’s roughly rank these biosurveillance goals:

1) Curing illness or precursors to illness (identifying the obesity cluster; clinical trial recruitment)

2) Helping match those offering products to those wanting them (food marketing)

3) Promoting the classification and de facto punishment of certain groups (identifying a certain class as worse credit risks)



A Slower Boat From China: Pilotless Ships and Changes to Labor and the Environment

A slower but powerful change is coming to a less familiar part of transportation: shipping. The Economist Tech Quarterly headline on Ghost Ships caught my attention because I know the term from piracy and a script I wrote about the subject. In modern usage, a ghost ship is a vessel whose pirate captors have disposed of the crew, painted on a new name, and/or set it adrift. The new ghost ships will also lack a crew, but for a different reason: the autonomous cargo vessels the article describes are an extension of insights from autonomous cars, and the returns to this shift could be just as important. Shipping has its own operator errors: “Most accidents at sea are the result of human error, just as they are in cars and planes.” And costs will come down. Not only would a ship not need a pilot; it may not need a crew.

With pilotless ships, a company could almost eliminate the crew. Costs drop not only for labor but for fuel, because ships carrying certain goods could move more slowly. “By some accounts, a 30% reduction in speed by a bulk carrier can save around 50% in fuel.” That saving is lost when a ship must carry, house, and feed people. Burning less fuel should also bring environmental benefits. And as the article notes, there is a piracy connection: the human cost of piracy would go down quite a bit. Pirates could still try to take over a ship, but with no crew to hold hostage, retaking the vessel becomes simpler. A remotely controlled ship that goes off course may also be harder to commandeer: a pirate might not be able to restart the engines or take the ship to destinations unknown, and shore control could have a kill switch that renders the ship useless.
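A hedged sketch of the shore-control scheme the article implies may help. Everything here is an assumption (the command names, the shared-key signing, the kill switch); I am not describing any real maritime protocol. The point is only that a ship that obeys nothing but authenticated commands gives hijackers little to work with:

```python
import hashlib
import hmac

SHARED_KEY = b"provisioned-at-port"  # hypothetical secret installed before departure

def sign(command: bytes, key: bytes = SHARED_KEY) -> str:
    """Shore control signs each command with a key the pirates lack."""
    return hmac.new(key, command, hashlib.sha256).hexdigest()

class ShipController:
    """Toy autonomous-ship controller that obeys only authenticated commands."""

    def __init__(self):
        self.engines_on = True

    def handle(self, command: bytes, signature: str) -> str:
        if not hmac.compare_digest(sign(command), signature):
            # A hijacker without the key cannot steer, restart, or redirect the ship.
            return "rejected"
        if command == b"KILL":
            self.engines_on = False  # the shore-side kill switch
        return "ok"

ship = ShipController()
print(ship.handle(b"KILL", "bogus signature"))  # rejected: a pirate's attempt
print(ship.handle(b"KILL", sign(b"KILL")))      # ok: shore control disables the ship
print(ship.engines_on)                          # False: the vessel is now useless to captors
```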

As with my thoughts on driverless cars, the new labor will be those who can operate the ship by remote. A shipping center could house experts to monitor the ships and take over as needed. Instead of months at sea, sailors would be, hmm, landlubbers. I’m not sure I like the sound of that, but then what I like has little to do with what the future holds.