
Author: Frank Pasquale

A Pithy Rendering of the New Political Economy

I remember reading Raymond Geuss’s The Idea of a Critical Theory in graduate school and finding it a clear, compelling work. Geuss reflects on the book in a recent essay, offering the following summation of our economic predicament:

What the 1980s and 1990s had in store for us. . . was the successive implementation of a series of financial gimmicks which created financial bubbles and allowed the illusion of increasing growth for the majority of the population to be maintained for a while. . . . [T]he system began to collapse in 2007 and 2008. Catastrophe was averted only by a bizarre . . . set of political interventions in the Western economies–interventions that have correctly been described as “socialism for the rich”: defaulting banks and failing industries were propped up by huge public subsidies, private debts were taken over by the state and profits continued to flow to private investors. This structure, which certainly bears no similarity whatsoever to the ways in which proponents of “capitalism” have described their favored arrangements, seems to give us the worst of all available worlds. . . .

[T]he forms of economic regulation that had been introduced during the Great Depression of the 1930s and had stood the West in good stead for over forty years were gradually relaxed or abolished during the 1980s. Social welfare systems that had gradually been developed came under pressure and began to be dismantled; public services were reduced or “privatized”; infrastructure began to crumble. Inequality, poverty and homelessness grew.

Thus the current trend of corporate profits without widespread prosperity.

One of the very few contemporary economists equal to the task of responding to these trends is Mariana Mazzucato, who teaches that any account of value extraction has to be premised on an account of value creation. I’ll be blogging on her work’s relevance to IP, tax, and other policy over the rest of the month. For now, I highly recommend her contribution to the panel “How to Change the Post-Crash Economy,” at the RSA.

The Assault on Journalism in Ferguson, Missouri

The city of Ferguson, Missouri now looks like a war zone. Rapidly escalating responses to protest by a militarized police force have created dangerous conditions. About the only defense people have is some public attention to their plight. And now even that is being shut down by a series of intimidation tactics. Consider the following:

1) As the Washington Post states, its “reporter Wesley Lowery was detained by police on Wednesday while reporting on the unrest in Ferguson, Mo., following the fatal shooting of unarmed teen Michael Brown by police over the weekend.” Huffington Post reporter Ryan Reilly had his head slammed against glass as he attempted to report on police action.

2) U.S. Courts of Appeals have affirmed the right to record the police. The Justice Department has offered clear, recent guidance on the topic.

3) As the Post’s Executive Editor has observed, the information blackout has been so pervasive that we are not even allowed to know who is executing it:

[Lowery was] illegally instructed to stop taking video of officers. Then he followed officers’ instructions to leave a McDonald’s — and after contradictory instructions on how to exit, he was slammed against a soda machine and then handcuffed. That behavior was wholly unwarranted and an assault on the freedom of the press to cover the news. The physical risk to Wesley himself is obvious and outrageous. After being placed in a holding cell, he was released with no charges and no explanation. He was denied information about the names and badge numbers of those who arrested him.

This is consistent with other anti-transparency measures in the dispute.

4) Police brutality has been a pervasive problem. We can only start a public conversation on the magnitude of the problem if people have the unfettered right to record law enforcement practices.

5) Many people have reported that police in Ferguson told them to turn off cameras and recording devices. Police refused to answer basic questions. Even major media organizations were told to leave.

6) Police tear-gassed journalists from Al Jazeera and local TV crews.

7) Local leaders are not safe, either. Both an alderman and a state senator were detained and tear-gassed.

The United States has not exactly distinguished itself in its treatment of journalists. In 2012, it fell to 47th in Reporters Without Borders’ Press Freedom Index, well behind countries like Suriname, Mali, and Slovakia, largely due to police harassment of photographers and videographers at Occupy Wall Street protests. How far should it fall if police can basically decide unilaterally to make entire cities “no First Amendment zones”? How can the US warn other countries not to “take military action against protesters,” if it allows an out-of-control force like Ferguson’s to plot a media blackout? This is a policy of order-at-all-costs, even if it means “law enforcers” breaking the law with impunity.

I will have more to say later on the underlying dispute (well covered by Mary Ann Franks and Jamelle Bouie). For now, all I can say is: we should be deeply worried about the broader campaign to create “urban battlespaces” in American cities. This is a dangerous amalgamation of police and military functions, thoughtlessly accelerated by the distribution of war-fighting equipment to local law enforcers around the country. Minimal standards of accountability require free access by the press.

How We’ll Know the Wikimedia Foundation is Serious About a Right to Remember

The “right to be forgotten” ruling in Europe has provoked a firestorm of protest from internet behemoths and some civil libertarians.* Few seem very familiar with classic privacy laws that govern automated data systems. Characteristic rhetoric comes from the Wikimedia Foundation:

The foundation which operates Wikipedia has issued new criticism of the “right to be forgotten” ruling, calling it “unforgivable censorship.” Speaking at the announcement of the Wikimedia Foundation’s first-ever transparency report in London, Wikipedia founder Jimmy Wales said the public had the “right to remember”.

I’m skeptical of this line of reasoning. But let’s take it at face value for now. How far should the right to remember extend? Consider the importance of automated ranking and rating systems in daily life: in contexts ranging from credit scores to terrorism risk assessments to Google search rankings. Do we have a “right to remember” all of these—to, say, fully review the record of automated processing years (or even decades) after it happens?

If the Wikimedia Foundation is serious about advocating a right to remember, it will apply the right to the key internet companies organizing online life for us. I’m not saying “open up all the algorithms now”—I respect the commercial rationale for trade secrecy. But years or decades after the key decisions are made, the value of the algorithms fades. Data involved could be anonymized. And just as Assange’s and Snowden’s revelations have been filtered through trusted intermediaries to protect vital interests, so too could an archive of Google or Facebook or Amazon ranking and rating decisions be limited to qualified researchers or journalists. Surely public knowledge about how exactly Google ranked and annotated Holocaust denial sites is at least as important as the right of a search engine to, say, distribute hacked medical records or credit card numbers.

So here’s my invitation to Lila Tretikov, Jimmy Wales, and Geoff Brigham: join me in calling for Google to commit to releasing a record of its decisions and data processing to an archive run by a third party, so future historians can understand how one of the most important companies in the world made decisions about how it ordered information. This is simply a bid to assure the preservation of (and access to) critical parts of our cultural, political, and economic history. Indeed, one of the first items I’d like to explore is exactly how Wikipedia itself was ranked so highly by Google at critical points in its history. Historians of Wikipedia deserve to know details about that part of its story. Don’t they have a right to remember?

*For more background, please note: we’ve recently hosted several excellent posts on the European Court of Justice’s interpretation of relevant directives. Though often called a “right to be forgotten,” the ruling in the Google Spain case might better be characterized as the application of due process, privacy, and anti-discrimination norms to automated data processing.

MarkelFest! at SEALS

Howard Wasserman and the team at Prawfs have organized a get-together at SEALS in memory of Dan Markel for this Saturday, and we at CoOp are honored to co-sponsor it. I’m sure this is the first of many conferences where Dan’s memory will be celebrated. Full details are here.

Failed Fiduciaries: Pension Funds’ Alliances with Private Equity Firms

As Yves Smith has reported, “the SEC has now announced that more than 50 percent of private equity firms it has audited have engaged in serious infractions of securities laws.” Smith, along with attorney Timothy Y. Fong, has been trying to shed light on PE arrangements for months, but has often been blocked by the very entities taken advantage of by the PE firms. As Smith concludes:

[I]nvestors have done a poor job of negotiating agreements so that they protect their interests and have done little if any monitoring once they’ve committed to a particular fund. As we’ll chronicle over the next few days, anyone who reads these agreements against the disclosures that investors are now required to make to the SEC and the public in their annual Form ADV can readily find numerous abuses. . . . But rather than live up to their fiduciary duties, pension funds that have invested in private equity funds haven’t merely sat pat as they were fleeced; even worse, they’ve been staunch defenders of the private equity industry’s special pleadings.

The SEC Chair has also harshly criticized the arrangements. States are making token efforts to reform matters after being exposed, but don’t expect much of substance to be done. The key problem is the distinction between those running pension funds and what Jennifer Taub calls the “ultimate investors”—those whose accounts are being managed. Until their interests are better aligned, expect to see more sweetheart deals via “alternative investments.”

Beyond Too Big to Fail

After documenting extraordinary rent-seeking (and gaining) by financial institutions, John Quiggin comes to the following conclusion:

[A]ny serious attempt to stabilize the macroeconomy and return to sustainable improvements in living standards must involve a drastic reduction in the size and economic weight of the financial sector. Attempts at regulating derivatives markets have proved utterly futile in the face of massive incentives to take profitable risks, backed up by the guarantee of a government bailout.

The only remaining option is to separate these markets entirely from the socially useful parts of the financial system, then let them fail. Publicly guaranteed banks should be banned from engaging in all but the most basic financial transactions, such as issuing loans and bonds and accepting deposits. In particular, banks should be prohibited from doing any business with institutions engaged in speculative finance such as trade in derivatives. Such institutions should be required to raise all their funds directly from investors, on a “buyer beware” basis, and should never be bailed out, directly or indirectly, when they get into trouble.

The theme of separating out the utility-like, payment systems management functions of banks, from speculative finance, is something I’ve been hearing in a good deal of British thought on financial regulation.  I expect American policy makers to catch up soon.

Finance’s Failures: Lack of Accountability

This week I’ll be highlighting some excellent, recent articles on problems in the US financial sector.  First up is Jennifer Taub’s Reforming the Banks for Good.  On the way to introducing six valuable reforms, Taub notes the following:

 In 2013 JPMorgan Chase (the bank that bought WaMu) agreed to pay $13 billion to the U.S. government related in part to the sale of bad mortgages to government-sponsored housing enterprises Fannie Mae and Freddie Mac. The settlement was heralded by the government as the largest ever with a single institution in U.S. history. But the board of directors at JPMorgan Chase awarded CEO Jamie Dimon a 74 percent pay increase (to $20 million) that same year. Dennis Kelleher, president of Better Markets, called this move “as shocking as it is indefensible,” noting, “It’s a real slap in the face to the [Department of Justice] and financial regulators who think that the actions that they’ve taken in the last year have been appropriate to punish and deter JPMorgan Chase.” It is hard not to conclude that those who helped create a global financial calamity have not and will not suffer personal consequences.

I’m looking forward to reading Brandon Garrett’s Too Big to Jail this fall to explore some systemic responses to the problem. If it’s not solved, fines simply become a cost of doing business. And for too-big-to-fail banks, no mere fine can deter bad conduct. If any particular penalty really endangered a bank, the money would just be funneled back to the bank in the form of a bailout.

Social Science in an Era of Corporate Big Data

In my last post, I explored the characteristics of Facebook’s model (i.e., exemplary) users. Today, I want to discuss the model users in the company—i.e., the data scientists who try to build stylized versions of reality (models) based on certain data points and theories. The Facebook emotion experiment is part of a much larger reshaping of social science. To what extent will academics study data-driven firms like Facebook, and to what extent will they try to join forces with the firms’ own researchers to study others?

Present incentives are clear: collaborate with (rather than develop a critical theory of) big data firms.  As Zeynep Tufekci puts it, “the most valuable datasets have become corporate and proprietary [and] top journals love publishing from them.”  “Big data” has an aura of scientific validity simply because of the velocity, volume, and variety of the phenomena it encompasses. Psychologists certainly must have learned *something* from looking at over 600,000 accounts’ activity, right?

The problem, though, is that the corporate “science” of manipulation is a far cry from academic science’s ethics of openness and reproducibility.* That’s already led to some embarrassments in the crossover from corporate to academic modeling (such as Google’s flu trends failures). Researchers within Facebook worried about multiple experiments being performed at once on individual users, which might compromise the results of any one study. Standardized review could have prevented that. But, true to the Silicon Valley ethic of “move fast and break things,” speed was paramount: “There’s no review process. Anyone…could run a test…trying to alter peoples’ behavior,” said one former Facebook data scientist.

Grant Getters and Committee Men

Why are journals so interested in this form of research? Why are academics jumping on board? Fortunately, social science has matured to the point that we now have a robust, insightful literature about the nature of social science itself. I know, this probably sounds awfully meta–exactly the type of navel-gazing Senator Coburn would excommunicate from the church of science. But it actually provides a much-needed historical perspective on how power and money shape knowledge. Consider, for instance, the opening of Joel Isaac’s article Tangled Loops, on Cold War social science:

During the first two decades of the Cold War, a new kind of academic figure became prominent in American public life: the credentialed social scientist or expert in the sciences of administration who was also, to use the parlance of the time, a “man of affairs.” Some were academic high-fliers conscripted into government roles in which their intellectual and organizational talents could be exploited. McGeorge Bundy, Walt Rostow, and Robert McNamara are the archetypes of such persons. An overlapping group of scholars became policymakers and political advisers on issues ranging from social welfare provision to nation-building in emerging postcolonial states.

Postwar leaders of the social and administrative sciences such as Talcott Parsons and Herbert Simon were skilled scientific brokers of just this sort: good “committee men,” grant-getters, proponents of interdisciplinary inquiry, and institution-builders. This hard-nosed, suit-wearing, business-like persona was connected to new, technologically refined forms of social science. . . . Antediluvian “social science” was eschewed in favour of mathematical, behavioural, and systems-based approaches to “human relations” such as operations research, behavioral science, game theory, systems theory, and cognitive science.

One of Isaac’s major contributions in that piece is to interpret the social science coming out of the academy (and entities like RAND) as a cultural practice: “Insofar as theories involve certain forms of practice, they are caught up in worldly, quotidian matters: performances, comportments, training regimes, and so on.” Government leveraged funding to mobilize research to specific ends. To maintain university patronage systems and research centers, leaders had to be on good terms with the grantors. The common goal of strengthening the US economy (and defeating the communist threat) cemented an ideological alliance.

Government still exerts influence in American social and behavioral sciences. But private industry controls critical data sets for the most glamorous, data-driven research. In the Cold War era, “grant getting” may have been the key to economic security, and to securing one’s voice in the university. Today, “exit” options are more important than voice, and what better place to exit to than an internet platform? Thus academic/corporate “flexians” shuttle between the two worlds. Their research cannot be too venal, lest the academy disdain it. But neither can it indulge in, say, critical theory (what would nonprofit social networks look like?), just as Cold War social scientists were ill-advised to, say, develop Myrdal’s or Leontief’s theories. There was a lot more money available for the Friedmanite direction economics would, eventually, take.

Intensifying academic precarity also makes the blandishments of corporate data science an “offer one can’t refuse.” Tenured jobs are growing scarcer. As MOOCmongers aspire to deskill and commoditize the academy, industry’s benefits and flexibility grow ever more alluring. Academic IRBs can impose a heavy bureaucratic burden; the corporate world is far more flexible. (Consider all the defenses of the Facebook experiment authored last week, which emphasized how little review corporate research has to go through: satisfy the boss, and you’re basically done, no matter how troubling your aims or methods may be in a purely academic context.)

Creating Kinds

So why does all this matter, other than to the quantitatively gifted individuals at the cutting edge of data science? It matters because, in Isaac’s words:

Theories and classifications in the human sciences do not “discover” an independently existing reality; they help, in part, to create it. Much of this comes down to the publicity of knowledge. Insofar as scientific descriptions of people are made available to the public, they may “change how we can think of ourselves, [and] change our sense of self-worth, even how we remember our own past.”

It is very hard to develop categories and kinds for internet firms, because they are so secretive about most of their operations. (And make no mistake about the current PR kerfuffle for Facebook: it will lead the company to become ever more secretive about its data science, just as Target started camouflaging its pregnancy-related ads and not talking to reporters after people appeared creeped out by the uncanny accuracy of its natal predictions.) But the data collection of the firms is creating whole new kinds of people—for marketers, for the NSA, and for anyone with the money or connections to access the information.

More likely than not, encoded in Facebook’s database is some new, milder DSM, with categories like the slightly stingy (who need to be induced to buy more); the profligate, who need frugality prompts; the creepy, who need to be hidden in newsfeeds lest they bum out the cool. Our new “Science Mart” creates these new human kinds, but also alters them, as “new sorting and theorizing induces changes in self-conception and in behavior of the people classified.” Perhaps in the future, upon being classified as “slightly depressed” by Facebook, users will see more happy posts. Perhaps the hypomanic will be brought down a bit. Or, perhaps if their state is better for business, it will be cultivated and promoted.

You may think that last possibility unfair, or a mischaracterization of the power of Facebook. But shouldn’t children have been excluded from its emotion experiment? Shouldn’t those whom it suspects may be clinically depressed? Shouldn’t some independent reviewer have asked about those possibilities? Journalists try to reassure us that Facebook is better now than it was 2 years ago. But the power imbalances in social science remain as funding cuts threaten researchers’ autonomy. Until research in general is properly valued, we can expect more psychologists, anthropologists, and data scientists to attune themselves to corporate research agendas, rather than questioning why data about users is so much more available than data about company practices.

Image Note: I’ve inserted a picture of Isaac’s book, which I highly recommend to readers interested in the history of social science.

*I suggested this was a problem in 2010.

Facebook’s Model Users

Facebook’s recent psychology experiment has raised difficult questions about the ethical standards of data-driven companies, and the universities that collaborate with them. We are still learning exactly who did what before publication. Some are wisely calling for a “People’s Terms of Service” agreement to curb further abuses. Others are more focused on the responsibility to protect research subjects. As Jack Balkin has suggested, we need these massive internet platforms to act as fiduciaries.

The experiment fiasco is just the latest in a long history of ethically troubling decisions at that firm, and several others like it. And the time is long past for serious, international action to impose some basic ethical limits on the business practices these behemoths pursue.

Unfortunately, many in Silicon Valley still barely get what the fuss is about. For them, A/B testing is simply a way of life. Using it to make people feel better or worse is a far cry from, say, manipulating video poker machines to squeeze a few extra dollars out of desperate consumers. “Casino owners do that all the time!”, one can almost hear them rejoin.

Yet there are some revealing similarities between casinos and major internet platforms. Consider this analogy from Rob Horning:

Social media platforms are engineered to be sticky — that is, addictive, as Alexis Madrigal details in [a] post about the “machine zone.” . . . Like video slots, which incite extended periods of “time-on-machine” to assure “continuous gaming productivity” (i.e. money extraction from players), social-media sites are designed to maximize time-on-site, to make their users more valuable to advertisers (Instagram, incidentally, is adding advertising) and to ratchet up user productivity in the form of data sharing and processing that social-media sites reserve the rights to.

That’s one reason we get headlines like “Teens Can’t Stop Using Facebook Even Though They Hate It.” There are sociobiological routes to conditioning action. The platforms are constantly shaping us, based on sophisticated psychological profiles.

For Facebook to continue to meet Wall Street’s demands for growth, its user base must grow and/or individual users must become more “productive.” Predictive analytics demands standardization: forecastable estimates of revenue-per-user. The more a person clicks on ads and buys products, the better. Secondarily, the more a person draws other potential ad-clickers in–via clicked-on content, catalyzing discussions, crying for help, whatever–the more valuable they become to the platform. The “model users” gain visibility, subtly instructing by example how to act on the network. They’ll probably never attain the notoriety of a Lei Feng, but the Republic of Facebookistan gladly pays them the currency of attention, as long as the investment pays off for top managers and shareholders.

As more people understand the implications of enjoying Facebook “for free”—i.e., that they are the product of the service—they also see that its real paying customers are advertisers. As Katherine Hayles has stated, the critical question here is: “will ubiquitous computing be coopted as a stalking horse for predatory capitalism, or can we seize the opportunity” to deploy more emancipatory uses of it? I have expressed faith in the latter possibility, but Facebook continually validates Julie Cohen’s critique of a surveillance-innovation complex.

Facebook’s Hidden Persuaders

Major internet platforms are constantly trying new things out on users, the better to change their interfaces. Perhaps they’re interested in changing their users, too. Consider this account of Facebook’s manipulation of its newsfeed:

If you were feeling glum in January 2012, it might not have been you. Facebook ran an experiment on 689,003 users to see if it could manipulate their emotions. One experimental group had stories with positive words like “love” and “nice” filtered out of their News Feeds; another experimental group had stories with negative words like “hurt” and “nasty” filtered out. And indeed, people who saw fewer positive posts created fewer of their own. Facebook made them sad for a psych experiment.
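The filtering design described in that account can be sketched roughly as follows. This is a hypothetical illustration only—the word lists, arm names, and function are invented for exposition, not drawn from Facebook’s actual system or the published study:

```python
# Hypothetical sketch of an emotion-filtering A/B design: each experimental
# arm suppresses posts containing words of one valence. Word lists invented.
POSITIVE = {"love", "nice", "great"}
NEGATIVE = {"hurt", "nasty", "awful"}

def filter_feed(posts, arm):
    """Drop posts containing blocked words for the given experimental arm."""
    blocked = POSITIVE if arm == "suppress_positive" else NEGATIVE
    return [p for p in posts if not (blocked & set(p.lower().split()))]

feed = ["I love this", "That was nasty", "Plain update"]
print(filter_feed(feed, "suppress_positive"))  # drops "I love this"
```

The researchers then compared the emotional tone of what each arm’s users subsequently posted—which is how suppressing positive stories was linked to users producing fewer positive posts of their own.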

James Grimmelmann suggests some potential legal and ethical pitfalls. Julie Cohen has dissected the larger political economy of modulation. For now, I’d just like to present a subtle shift in Silicon Valley rhetoric:

c. 2008: “How dare you suggest we’d manipulate our users! What a paranoid view.”
c. 2014: “Of course we manipulate users! That’s how we optimize time-on-machine.”

There are many cards in the denialists’ deck. An earlier Facebook-inspired study warns of “greater spikes in global emotion that could generate increased volatility in everything from political systems to financial markets.” Perhaps social networks will take on the dampening of inconvenient emotions as a public service. For a few glimpses of the road ahead, take a look at Bernard Harcourt (on Zunzuneo), Jonathan Zittrain, Robert Epstein, and N. Katherine Hayles.