Author: Frank Pasquale

Failed Fiduciaries: Pension Funds’ Alliances with Private Equity Firms

As Yves Smith has reported, “the SEC has now announced that more than 50 percent of private equity firms it has audited have engaged in serious infractions of securities laws.” Smith, along with attorney Timothy Y. Fong, has been trying to shed light on PE arrangements for months, but has often been blocked by the very entities taken advantage of by the PE firms. As Smith concludes:

[I]nvestors have done a poor job of negotiating agreements so that they protect their interests and have done little if any monitoring once they’ve committed to a particular fund. As we’ll chronicle over the next few days, anyone who reads these agreements against the disclosures that investors are now required to make to the SEC and the public in their annual Form ADV can readily find numerous abuses. . . . But rather than live up to their fiduciary duties, pension funds that have invested in private equity funds haven’t merely sat pat as they were fleeced; even worse, they’ve been staunch defenders of the private equity industry’s special pleadings.

The SEC Chair has also harshly criticized the arrangements. States are making token efforts at reform after being exposed, but don’t expect much of substance to be done. The key problem is the divide between those running pension funds and what Jennifer Taub calls the “ultimate investors”–those whose accounts are being managed. Until their interests are better aligned, expect to see more sweetheart deals via “alternative investments.”

Beyond Too Big to Fail

After documenting extraordinary rent-seeking (and gaining) by financial institutions, John Quiggin comes to the following conclusion:

[A]ny serious attempt to stabilize the macroeconomy and return to sustainable improvements in living standards must involve a drastic reduction in the size and economic weight of the financial sector. Attempts at regulating derivatives markets have proved utterly futile in the face of massive incentives to take profitable risks, backed up by the guarantee of a government bailout.

The only remaining option is to separate these markets entirely from the socially useful parts of the financial system, then let them fail. Publicly guaranteed banks should be banned from engaging in all but the most basic financial transactions, such as issuing loans and bonds and accepting deposits. In particular, banks should be prohibited from doing any business with institutions engaged in speculative finance such as trade in derivatives. Such institutions should be required to raise all their funds directly from investors, on a “buyer beware” basis, and should never be bailed out, directly or indirectly, when they get into trouble.

The theme of separating out the utility-like, payment systems management functions of banks, from speculative finance, is something I’ve been hearing in a good deal of British thought on financial regulation.  I expect American policy makers to catch up soon.

Finance’s Failures: Lack of Accountability

This week I’ll be highlighting some excellent, recent articles on problems in the US financial sector.  First up is Jennifer Taub’s Reforming the Banks for Good.  On the way to introducing six valuable reforms, Taub notes the following:

 In 2013 JPMorgan Chase (the bank that bought WaMu) agreed to pay $13 billion to the U.S. government related in part to the sale of bad mortgages to government-sponsored housing enterprises Fannie Mae and Freddie Mac. The settlement was heralded by the government as the largest ever with a single institution in U.S. history. But the board of directors at JPMorgan Chase awarded CEO Jamie Dimon a 74 percent pay increase (to $20 million) that same year. Dennis Kelleher, president of Better Markets, called this move “as shocking as it is indefensible,” noting, “It’s a real slap in the face to the [Department of Justice] and financial regulators who think that the actions that they’ve taken in the last year have been appropriate to punish and deter JPMorgan Chase.” It is hard not to conclude that those who helped create a global financial calamity have not and will not suffer personal consequences.

I’m looking forward to reading Brandon Garrett’s Too Big to Jail this fall to explore some systemic responses to the problem. If it’s not solved, fines simply become a cost of doing business. And for too big to fail banks, no mere fine can deter bad conduct. If any particular penalty really endangered a bank, it would just be funneled back to the bank in the form of a bailout.

Social Science in an Era of Corporate Big Data

In my last post, I explored the characteristics of Facebook’s model (i.e., exemplary) users. Today, I want to discuss the model users in the company–i.e., the data scientists who try to build stylized versions of reality (models) based on certain data points and theories. The Facebook emotion experiment is part of a much larger reshaping of social science. To what extent will academics study data-driven firms like Facebook, and to what extent will they try to join forces with those firms’ own researchers to study others?

Present incentives are clear: collaborate with (rather than develop a critical theory of) big data firms.  As Zeynep Tufekci puts it, “the most valuable datasets have become corporate and proprietary [and] top journals love publishing from them.”  “Big data” has an aura of scientific validity simply because of the velocity, volume, and variety of the phenomena it encompasses. Psychologists certainly must have learned *something* from looking at over 600,000 accounts’ activity, right?

The problem, though, is that the corporate “science” of manipulation is a far cry from academic science’s ethics of openness and reproducibility.* That’s already led to some embarrassments in the crossover from corporate to academic modeling (such as Google’s flu trends failures). Researchers within Facebook worried about multiple experiments being performed at once on individual users, which might compromise the results of any one study. Standardized review could have prevented that. But, true to the Silicon Valley ethic of “move fast and break things,” speed was paramount: “There’s no review process. Anyone…could run a test…trying to alter peoples’ behavior,” said one former Facebook data scientist.

Grant Getters and Committee Men

Why are journals so interested in this form of research? Why are academics jumping on board? Fortunately, social science has matured to the point that we now have a robust, insightful literature about the nature of social science itself. I know, this probably sounds awfully meta–exactly the type of navel-gazing Senator Coburn would excommunicate from the church of science. But it actually provides a much-needed historical perspective on how power and money shape knowledge. Consider, for instance, the opening of Joel Isaac’s article Tangled Loops, on Cold War social science:

During the first two decades of the Cold War, a new kind of academic figure became prominent in American public life: the credentialed social scientist or expert in the sciences of administration who was also, to use the parlance of the time, a “man of affairs.” Some were academic high-fliers conscripted into government roles in which their intellectual and organizational talents could be exploited. McGeorge Bundy, Walt Rostow, and Robert McNamara are the archetypes of such persons. An overlapping group of scholars became policymakers and political advisers on issues ranging from social welfare provision to nation-building in emerging postcolonial states.

Postwar leaders of the social and administrative sciences such as Talcott Parsons and Herbert Simon were skilled scientific brokers of just this sort: good “committee men,” grant-getters, proponents of interdisciplinary inquiry, and institution-builders. This hard-nosed, suit-wearing, business-like persona was connected to new, technologically refined forms of social science. . . . Antediluvian “social science” was eschewed in favour of mathematical, behavioural, and systems-based approaches to “human relations” such as operations research, behavioral science, game theory, systems theory, and cognitive science.

One of Isaac’s major contributions in that piece is to interpret the social science coming out of the academy (and entities like RAND) as a cultural practice: “Insofar as theories involve certain forms of practice, they are caught up in worldly, quotidian matters: performances, comportments, training regimes, and so on.” Government leveraged funding to mobilize research to specific ends. To maintain university patronage systems and research centers, leaders had to be on good terms with the grantors. The common goal of strengthening the US economy (and defeating the communist threat) cemented an ideological alliance.

Government still exerts influence in American social and behavioral sciences. But private industry controls critical data sets for the most glamorous, data-driven research. In the Cold War era, “grant getting” may have been the key to economic security, and to securing one’s voice in the university. Today, “exit” options are more important than voice, and what better place to exit to than an internet platform? Thus academic/corporate “flexians” shuttle between the two worlds. Their research cannot be too venal, lest the academy disdain it. But neither can it indulge in, say, critical theory (what would nonprofit social networks look like?), just as Cold War social scientists were ill-advised to, say, develop Myrdal’s or Leontief’s theories. There was a lot more money available for the Friedmanite direction economics would, eventually, take.

Intensifying academic precarity also makes the blandishments of corporate data science an “offer one can’t refuse.” Tenured jobs are growing scarcer. As MOOCmongers aspire to deskill and commoditize the academy, industry’s benefits and flexibility grow ever more alluring. Academic IRBs can impose a heavy bureaucratic burden; the corporate world is far more flexible. (Consider all the defenses of the Facebook experiment offered last week, which emphasized how little review corporate research has to go through: satisfy the boss, and you’re basically done, no matter how troubling your aims or methods may be in a purely academic context.)

Creating Kinds

So why does all this matter, other than to the quantitatively gifted individuals at the cutting edge of data science? It matters because, in Isaac’s words:

Theories and classifications in the human sciences do not “discover” an independently existing reality; they help, in part, to create it. Much of this comes down to the publicity of knowledge. Insofar as scientific descriptions of people are made available to the public, they may “change how we can think of ourselves, [and] change our sense of self-worth, even how we remember our own past.”

It is very hard to develop categories and kinds for internet firms, because they are so secretive about most of their operations. (And make no mistake about the current PR kerfuffle for Facebook: it will lead the company to become ever more secretive about its data science, just as Target started camouflaging its pregnancy-related ads and not talking to reporters after people appeared creeped out by the uncanny accuracy of its natal predictions.) But the data collection of the firms is creating whole new kinds of people—for marketers, for the NSA, and for anyone with the money or connections to access the information.

More likely than not, encoded in Facebook’s database is some new, milder DSM, with categories like the slightly stingy (who need to be induced to buy more); the profligate, who need frugality prompts; the creepy, who need to be hidden in newsfeeds lest they bum out the cool. Our new “Science Mart” creates these new human kinds, but also alters them, as “new sorting and theorizing induces changes in self-conception and in behavior of the people classified.” Perhaps in the future, upon being classified as “slightly depressed” by Facebook, users will see more happy posts. Perhaps the hypomanic will be brought down a bit. Or, perhaps if their state is better for business, it will be cultivated and promoted.

You may think that last possibility unfair, or a mischaracterization of the power of Facebook. But shouldn’t children have been excluded from its emotion experiment? Shouldn’t those whom it suspects may be clinically depressed? Shouldn’t some independent reviewer have asked about those possibilities? Journalists try to reassure us that Facebook is better now than it was 2 years ago. But the power imbalances in social science remain as funding cuts threaten researchers’ autonomy. Until research in general is properly valued, we can expect more psychologists, anthropologists, and data scientists to attune themselves to corporate research agendas, rather than questioning why data about users is so much more available than data about company practices.

Image Note: I’ve inserted a picture of Isaac’s book, which I highly recommend to readers interested in the history of social science.

*I suggested this was a problem in 2010.

Facebook’s Model Users

Facebook’s recent psychology experiment has raised difficult questions about the ethical standards of data-driven companies, and the universities that collaborate with them. We are still learning exactly who did what before publication. Some are wisely calling for a “People’s Terms of Service” agreement to curb further abuses. Others are more focused on the responsibility to protect research subjects. As Jack Balkin has suggested, we need these massive internet platforms to act as fiduciaries.

The experiment fiasco is just the latest in a long history of ethically troubling decisions at that firm, and several others like it. And the time is long past for serious, international action to impose some basic ethical limits on the business practices these behemoths pursue.

Unfortunately, many in Silicon Valley still barely get what the fuss is about. For them, A/B testing is simply a way of life. Using it to make people feel better or worse is a far cry from, say, manipulating video poker machines to squeeze a few extra dollars out of desperate consumers. “Casino owners do that all the time!”, one can almost hear them rejoin.
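To see just how routine this way of life is, consider a minimal, hypothetical sketch of how a platform might sort users into experimental arms. The function name and experiment label below are invented for illustration; nothing here reflects Facebook’s actual systems:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into an experimental arm.

    Hashing the (experiment, user) pair means the same user always
    lands in the same arm, so behavior can be compared across arms
    over time -- without the user ever being told an experiment exists.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Every user who loads the product is silently assigned:
arm = assign_variant("user_42", "feed_ranking_v2")
```

A few lines like these are all it takes to run a test on millions of people, which is precisely why, absent any review process, the practice invites the abuses discussed above.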

Yet there are some revealing similarities between casinos and major internet platforms. Consider this analogy from Rob Horning:

Social media platforms are engineered to be sticky — that is, addictive, as Alexis Madrigal details in [a] post about the “machine zone.” . . . Like video slots, which incite extended periods of “time-on-machine” to assure “continuous gaming productivity” (i.e. money extraction from players), social-media sites are designed to maximize time-on-site, to make their users more valuable to advertisers (Instagram, incidentally, is adding advertising) and to ratchet up user productivity in the form of data sharing and processing that social-media sites reserve the rights to.

That’s one reason we get headlines like “Teens Can’t Stop Using Facebook Even Though They Hate It.” There are sociobiological routes to conditioning action. The platforms are constantly shaping us, based on sophisticated psychological profiles.

For Facebook to continue to meet Wall Street’s demands for growth, its user base must grow and/or individual users must become more “productive.” Predictive analytics demands standardization: forecastable estimates of revenue-per-user. The more a person clicks on ads and buys products, the better. Secondarily, the more a person draws other potential ad-clickers in–via clicked-on content, catalyzing discussions, crying for help, whatever–the more valuable they become to the platform. The “model users” gain visibility, subtly instructing by example how to act on the network. They’ll probably never attain the notoriety of a Lei Feng, but the Republic of Facebookistan gladly pays them the currency of attention, as long as the investment pays off for top managers and shareholders.

As more people understand the implications of enjoying Facebook “for free”–i.e., that they are the product of the service–they also see that its real paying customers are advertisers. As Katherine Hayles has stated, the critical question here is: “will ubiquitous computing be coopted as a stalking horse for predatory capitalism, or can we seize the opportunity” to deploy more emancipatory uses of it?  I have expressed faith in the latter possibility, but Facebook continually validates Julie Cohen’s critique of a surveillance-innovation complex.

Facebook’s Hidden Persuaders

Major internet platforms are constantly trying new things out on users, to better change their interfaces. Perhaps they’re interested in changing their users, too. Consider this account of Facebook’s manipulation of its newsfeed:

If you were feeling glum in January 2012, it might not have been you. Facebook ran an experiment on 689,003 users to see if it could manipulate their emotions. One experimental group had stories with positive words like “love” and “nice” filtered out of their News Feeds; another experimental group had stories with negative words like “hurt” and “nasty” filtered out. And indeed, people who saw fewer positive posts created fewer of their own. Facebook made them sad for a psych experiment.

James Grimmelmann suggests some potential legal and ethical pitfalls. Julie Cohen has dissected the larger political economy of modulation. For now, I’d just like to present a subtle shift in Silicon Valley rhetoric:

c. 2008: “How dare you suggest we’d manipulate our users! What a paranoid view.”
c. 2014: “Of course we manipulate users! That’s how we optimize time-on-machine.”

There are many cards in the denialists’ deck. An earlier Facebook-inspired study warns of “greater spikes in global emotion that could generate increased volatility in everything from political systems to financial markets.” Perhaps social networks will take on the dampening of inconvenient emotions as a public service. For a few glimpses of the road ahead, take a look at Bernard Harcourt (on Zunzuneo), Jonathan Zittrain, Robert Epstein, and N. Katherine Hayles.

A More Nuanced View of Legal Automation

A Guardian writer has updated Farhad Manjoo’s classic report, “Will a Robot Steal Your Job?” Of course, lawyers are in the crosshairs. As Julius Stone noted in The Legal System and Lawyers’ Reasoning, scholars have addressed the automation of legal processes since at least the 1960s. Al Gore now says that a “new algorithm . . . makes it possible for one first year lawyer to do the same amount of legal research that used to require 500.”* But when one actually reads the studies trumpeted by the prophets of disruption, a more nuanced perspective emerges.

Let’s start with the experts cited first in the article:

Oxford academics Carl Benedikt Frey and Michael A Osborne have predicted computerisation could make nearly half of jobs redundant within 10 to 20 years. Office work and service roles, they wrote, were particularly at risk. But almost nothing is impervious to automation.

The idea of “computing” a legal obligation may seem strange at the outset, but we already enjoy–or endure–it daily. For example, a DVD may only be licensed for play in the US and Europe, and then be “coded” so it can only play in those regions and not others. Were a human playing the DVD for you, he might demand a copy of the DVD’s terms of use and receipt, to see if it was authorized for playing in a given area. Computers need such a term translated into a language they can “understand.” More precisely, the legal terms embedded in the DVD must lead to predictable reactions from the hardware that encounters them. From Lessig to Virilio, the lesson is clear: “architectural regimes become computational, and vice versa.”
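The point can be made concrete with a small sketch: once translated, a license term is just a predicate the hardware evaluates, with no human judgment in the loop. The region numbers and names below are illustrative, not drawn from any actual DVD specification:

```python
# Hypothetical encoding of the license term "playable only in the
# US/Canada (region 1) and Europe (region 2)".
LICENSED_REGIONS = {1, 2}

def may_play(player_region: int, licensed_regions=LICENSED_REGIONS) -> bool:
    """The legal term, rendered as a predicate the player evaluates."""
    return player_region in licensed_regions

assert may_play(1)       # a region-1 player: playback permitted
assert not may_play(3)   # a region-3 player: refused automatically
```

Where a human intermediary might weigh context or equities, the predicate simply returns true or false: that is the predictable reaction the embedded term is designed to produce.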

So certainly, to the extent lawyers are presently doing rather simple tasks, computation can replace them. But Frey & Osborne also identify barriers to successful automation:

1. Perception and manipulation tasks. Robots are still unable to match the depth and breadth of human perception.
2. Creative intelligence tasks. The psychological processes underlying human creativity are difficult to specify.
3. Social intelligence tasks. Human social intelligence is important in a wide range of work tasks, such as those involving negotiation, persuasion and care. (26)

Frey & Osborne only explicitly discuss legal research and document review (for example, identification and isolation among mass document collections) as easily automatable. They concede that “the computerisation of legal research will complement the work of lawyers” (17). They acknowledge that “for the work of lawyers to be fully automated, engineering bottlenecks to creative and social intelligence will need to be overcome.” In the end, they actually categorize “legal” careers as having a “low risk” of “computerization” (37).

The View from AI & Labor Economics

Those familiar with the smarter voices on this topic, like our guest blogger Harry Surden, would not be surprised. There is a world of difference between computation as substitution for attorneys, and computation as complement. The latter increases lawyers’ private income and (if properly deployed) contribution to society. That’s one reason I helped devise the course Health Data and Advocacy at Seton Hall (co-taught with a statistician and data visualization expert), and why I continue to teach (and research) the law of electronic health records in my seminar Health Information, Privacy, and Innovation, now that I’m at Maryland. As Surden observes, “many of the tasks performed by attorneys do appear to require the type of higher order intellectual skills that are beyond the capability of current techniques.” But they can be complemented by an awareness of rapid advances in software, apps, and data analysis.

Disruption: A Tarnished Brand

I’ve been hearing for years that law needs to be “disrupted.” “Legal rebels” and “reinventors” of law may want to take a look at Jill Lepore’s devastating account of Clay Christensen’s development of that buzzword. Lepore surfaces the ideology behind it, and suggests some shoddy research:

Christensen’s sources are often dubious and his logic questionable. His single citation for his investigation of the “disruptive transition from mechanical to electronic motor controls,” in which he identifies the Allen-Bradley Company as triumphing over four rivals, is a book called “The Bradley Legacy,” an account published by a foundation established by the company’s founders. This is akin to calling an actor the greatest talent in a generation after interviewing his publicist.

Critiques of Christensen’s forays into health and education are common, but Lepore takes the battle to his home territory of manufacturing, debunking “success stories” trumpeted by Christensen. She also exposes the continuing health of firms the Christensenites deemed doomed. For Lepore, disruption is less a scientific theory of management than a thin ideological veneer for pushing short-sighted, immature, and venal business models onto startups:

They are told that they should be reckless and ruthless. Their investors . . . tell them that the world is a terrifying place, moving at a devastating pace. “Today I run a venture capital firm and back the next generation of innovators who are, as I was throughout my earlier career, dead-focused on eating your lunch,” [one] writes. His job appears to be to convince a generation of people who want to do good and do well to learn, instead, remorselessness. Forget rules, obligations, your conscience, loyalty, a sense of the commonweal. . . . Don’t look back. Never pause. Disrupt or be disrupted.

In other words, disruption is a slick rebranding of the B-School Machiavellianism that brought us “systemic deregulation and financialization.” If you’re wondering why many top business scholars went from “higher aims to hired hands,” Lepore’s essay is a great place to start.

Endless Replay, Continued

My post on surveillance creating an “endless replay” has some more playful applications, as well. Consider the “Groundhog Date” now marketed by Match.com: “Partnering with an LA based company … that matches people to dates using facial recognition software, users will be asked to send in pictures of their exes, which will be used to determine who they will be matched with on the site.” Critics are aghast at the prospect of outsourcing our humanity. However habitual our actions may be, no one wants to be typecast as a typecaster.

The Logic of Extraction

Despite happy talk from corporate chieftains (and their friends in government), deep flaws in the American economy are becoming harder to ignore. Two recent articles have been particularly insightful.

First, despite America’s self-image as a crucible of cutthroat competition, our top businesses specialize in eliminating rivals. As Lina Khan and Sandeep Vaheesan observe,

Since the early 1980s, executives and financiers have consolidated control over dozens of industries across the U.S. economy. . . . [This strategy] has even become a basic formula for successful investing. Goldman Sachs in February published a research memo advising investors to seek out “oligopolistic market structure[s]” in which “a smaller set of relevant peers faces lower competitive intensity, greater stickiness and pricing power with customers due to reduced choice, scale cost benefits including stronger leverage over suppliers, and higher barriers to new entrants all at once.” Goldman went on to highlight a few markets, including beer, where dramatic consolidation over the past decade has enabled dominant companies to use their market power to extract more from suppliers and consumers — and thereby enrich investors.

Of course, Goldman had its own angle on the beer—a commodities shuffle to make money off the 90 billion aluminum cans consumed in the US each year.

Khan & Vaheesan are right to focus on finance as the key driver in the transformation. Gautam Mukunda has explored how leaders in the sector have enforced a short-term, extractionist mindset on US industry:

Pressure to reduce assets made Sara Lee, for example, shift from manufacturing clothing and food to brand management. Sara Lee’s CEO explained, “Wall Street can wipe you out. They are the rule-setters…and they have decided to give premiums to companies that harbor the most profits for the least assets.” In the pursuit of higher stock returns, many electronics companies have, like Boeing and Sara Lee, outsourced their manufacturing, even though tightly integrating R&D and manufacturing is crucial to innovation.

Clayton Christensen argues that management’s adoption of Wall Street’s preferred metrics has hindered innovation. Scholars and executives alike have criticized Wall Street not only for promoting short-term thinking but for sacrificing the interests of employees and customers to benefit shareholders and for encouraging dishonesty from executives who feel they’re being asked to meet impossible demands.

Considered in this light, it’s no wonder Wall Street & its enablers are trying so hard to hide the terms of their deals with states. We’ll need to look elsewhere for economic leadership.